00:00:00.001 Started by upstream project "autotest-nightly" build number 3795
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3175
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.100 The recommended git tool is: git
00:00:00.100 using credential 00000000-0000-0000-0000-000000000002
00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.166 Fetching changes from the remote Git repository
00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.230 Using shallow fetch with depth 1
00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.230 > git --version # timeout=10
00:00:00.284 > git --version # 'git version 2.39.2'
00:00:00.284 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.319 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.319 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:11.311 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:11.325 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:11.338 Checking out Revision ea7646cba2e992b05bb6a53407de7fbcf465b5c6 (FETCH_HEAD)
00:00:11.338 > git config core.sparsecheckout # timeout=10
00:00:11.351 > git read-tree -mu HEAD # timeout=10
00:00:11.370 > git checkout -f ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=5
00:00:11.393 Commit message: "ansible/inventory: Fix GP16's BMC address"
00:00:11.393 > git rev-list --no-walk fcd93e2ba68418fb72075306675cd28d3d4f53d6 # timeout=10
00:00:11.516 [Pipeline] Start of Pipeline
00:00:11.531 [Pipeline] library
00:00:11.533 Loading library shm_lib@master
00:00:11.533 Library shm_lib@master is cached. Copying from home.
00:00:11.551 [Pipeline] node
00:00:11.563 Running on CYP12 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:11.564 [Pipeline] {
00:00:11.575 [Pipeline] catchError
00:00:11.576 [Pipeline] {
00:00:11.589 [Pipeline] wrap
00:00:11.600 [Pipeline] {
00:00:11.607 [Pipeline] stage
00:00:11.609 [Pipeline] { (Prologue)
00:00:11.771 [Pipeline] sh
00:00:12.061 + logger -p user.info -t JENKINS-CI
00:00:12.083 [Pipeline] echo
00:00:12.084 Node: CYP12
00:00:12.094 [Pipeline] sh
00:00:12.398 [Pipeline] setCustomBuildProperty
00:00:12.410 [Pipeline] echo
00:00:12.412 Cleanup processes
00:00:12.417 [Pipeline] sh
00:00:12.704 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:12.704 1774641 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:12.718 [Pipeline] sh
00:00:13.009 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:13.009 ++ grep -v 'sudo pgrep'
00:00:13.009 ++ awk '{print $1}'
00:00:13.009 + sudo kill -9
00:00:13.009 + true
00:00:13.025 [Pipeline] cleanWs
00:00:13.035 [WS-CLEANUP] Deleting project workspace...
00:00:13.035 [WS-CLEANUP] Deferred wipeout is used...
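The "Cleanup processes" step above chains pgrep, grep and awk and then tolerates an empty result (the bare "+ sudo kill -9" followed by "+ true"). A minimal standalone sketch of that idiom, assuming the same workspace path as this job:

  # Kill any SPDK processes left over from a previous run in this workspace.
  # pgrep -af prints "PID command line"; grep -v drops the pgrep invocation itself;
  # awk keeps only the PID column. With an empty list, `kill -9` has no operands
  # and fails, so `|| true` keeps the cleanup step from failing the build.
  WS=/var/jenkins/workspace/nvmf-phy-autotest
  sudo kill -9 $(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true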
00:00:13.043 [WS-CLEANUP] done
00:00:13.047 [Pipeline] setCustomBuildProperty
00:00:13.061 [Pipeline] sh
00:00:13.343 + sudo git config --global --replace-all safe.directory '*'
00:00:13.424 [Pipeline] nodesByLabel
00:00:13.426 Found a total of 2 nodes with the 'sorcerer' label
00:00:13.436 [Pipeline] httpRequest
00:00:13.441 HttpMethod: GET
00:00:13.442 URL: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:13.446 Sending request to url: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:13.462 Response Code: HTTP/1.1 200 OK
00:00:13.462 Success: Status code 200 is in the accepted range: 200,404
00:00:13.463 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:22.048 [Pipeline] sh
00:00:22.338 + tar --no-same-owner -xf jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz
00:00:22.356 [Pipeline] httpRequest
00:00:22.361 HttpMethod: GET
00:00:22.362 URL: http://10.211.164.101/packages/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:00:22.363 Sending request to url: http://10.211.164.101/packages/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:00:22.370 Response Code: HTTP/1.1 200 OK
00:00:22.370 Success: Status code 200 is in the accepted range: 200,404
00:00:22.371 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:02:42.053 [Pipeline] sh
00:02:42.343 + tar --no-same-owner -xf spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz
00:02:44.902 [Pipeline] sh
00:02:45.188 + git -C spdk log --oneline -n5
00:02:45.189 9ccef4907 nvme/tcp: fix seq failure handling
00:02:45.189 2a268d7a6 nvme/tcp: move logic from safe ver of req complete
00:02:45.189 8531a41f9 nvme/tcp: add util to cond schedule qpair poll
00:02:45.189 b10f50b08 scripts/pkgdep: Add pkg-config package to {rhel,debian}-based distros
00:02:45.189 89d49f772 pkgdep/debian: Handle PEP 668
00:02:45.201 [Pipeline] }
00:02:45.219 [Pipeline] // stage
00:02:45.228 [Pipeline] stage
00:02:45.230 [Pipeline] { (Prepare)
00:02:45.249 [Pipeline] writeFile
00:02:45.268 [Pipeline] sh
00:02:45.555 + logger -p user.info -t JENKINS-CI
00:02:45.568 [Pipeline] sh
00:02:45.853 + logger -p user.info -t JENKINS-CI
00:02:45.865 [Pipeline] sh
00:02:46.150 + cat autorun-spdk.conf
00:02:46.150 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:46.150 SPDK_TEST_NVMF=1
00:02:46.150 SPDK_TEST_NVME_CLI=1
00:02:46.150 SPDK_TEST_NVMF_NICS=mlx5
00:02:46.150 SPDK_RUN_UBSAN=1
00:02:46.150 NET_TYPE=phy
00:02:46.158 RUN_NIGHTLY=1
00:02:46.162 [Pipeline] readFile
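The autorun-spdk.conf printed above is plain shell that gets sourced later in the log (the "++ SPDK_TEST_NVMF=1" trace lines), so it reads as a set of switches. An annotated copy follows; the comments are my reading of what each flag selects for this nightly NVMe-oF run, not part of the file itself:

  SPDK_RUN_FUNCTIONAL_TEST=1   # run the functional test suites, not only the build
  SPDK_TEST_NVMF=1             # exercise the NVMe-oF target/host tests
  SPDK_TEST_NVME_CLI=1         # also test against nvme-cli
  SPDK_TEST_NVMF_NICS=mlx5     # use Mellanox mlx5 RDMA NICs (mlx5_ib is loaded below)
  SPDK_RUN_UBSAN=1             # build with the undefined behavior sanitizer
  NET_TYPE=phy                 # run over physical NICs rather than a virtual/soft fabric
  RUN_NIGHTLY=1                # enable the longer nightly-only test set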
00:02:46.184 [Pipeline] withEnv
00:02:46.186 [Pipeline] {
00:02:46.200 [Pipeline] sh
00:02:46.487 + set -ex
00:02:46.488 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:02:46.488 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:02:46.488 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:46.488 ++ SPDK_TEST_NVMF=1
00:02:46.488 ++ SPDK_TEST_NVME_CLI=1
00:02:46.488 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:46.488 ++ SPDK_RUN_UBSAN=1
00:02:46.488 ++ NET_TYPE=phy
00:02:46.488 ++ RUN_NIGHTLY=1
00:02:46.488 + case $SPDK_TEST_NVMF_NICS in
00:02:46.488 + DRIVERS=mlx5_ib
00:02:46.488 + [[ -n mlx5_ib ]]
00:02:46.488 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:46.488 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:46.488 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:46.488 rmmod: ERROR: Module irdma is not currently loaded
00:02:46.488 rmmod: ERROR: Module i40iw is not currently loaded
00:02:46.488 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:46.488 + true
00:02:46.488 + for D in $DRIVERS
00:02:46.488 + sudo modprobe mlx5_ib
00:02:46.749 + exit 0
00:02:46.759 [Pipeline] }
00:02:46.779 [Pipeline] // withEnv
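Stripped of the xtrace noise, the withEnv step above is an "unload everything, load what this run needs" sequence. A condensed sketch, with the driver choice following this job's SPDK_TEST_NVMF_NICS=mlx5 setting:

  # Unload whatever RDMA NIC drivers a previous job may have left loaded, then
  # load only the family selected for this run. rmmod fails for modules that are
  # not currently loaded (the ERROR lines above), so that failure is ignored.
  case "$SPDK_TEST_NVMF_NICS" in
    mlx5) DRIVERS=mlx5_ib ;;
  esac
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
    sudo modprobe "$D"
  done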
00:02:46.784 [Pipeline] }
00:02:46.801 [Pipeline] // stage
00:02:46.812 [Pipeline] catchError
00:02:46.814 [Pipeline] {
00:02:46.829 [Pipeline] timeout
00:02:46.829 Timeout set to expire in 40 min
00:02:46.831 [Pipeline] {
00:02:46.846 [Pipeline] stage
00:02:46.848 [Pipeline] { (Tests)
00:02:46.864 [Pipeline] sh
00:02:47.218 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:02:47.218 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:02:47.218 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:02:47.218 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:02:47.218 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:47.218 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:02:47.218 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:02:47.218 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:47.218 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:02:47.218 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:47.218 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:02:47.218 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:02:47.218 + source /etc/os-release
00:02:47.218 ++ NAME='Fedora Linux'
00:02:47.218 ++ VERSION='38 (Cloud Edition)'
00:02:47.218 ++ ID=fedora
00:02:47.218 ++ VERSION_ID=38
00:02:47.218 ++ VERSION_CODENAME=
00:02:47.218 ++ PLATFORM_ID=platform:f38
00:02:47.218 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:47.218 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:47.218 ++ LOGO=fedora-logo-icon
00:02:47.218 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:47.218 ++ HOME_URL=https://fedoraproject.org/
00:02:47.218 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:47.218 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:47.218 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:47.218 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:47.218 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:47.218 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:47.218 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:47.218 ++ SUPPORT_END=2024-05-14
00:02:47.218 ++ VARIANT='Cloud Edition'
00:02:47.218 ++ VARIANT_ID=cloud
00:02:47.218 + uname -a
00:02:47.218 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:47.218 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:02:50.520 Hugepages
00:02:50.520 node hugesize free / total
00:02:50.520 node0 1048576kB 0 / 0
00:02:50.520 node0 2048kB 0 / 0
00:02:50.520 node1 1048576kB 0 / 0
00:02:50.520 node1 2048kB 0 / 0
00:02:50.520
00:02:50.520 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:50.520 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:50.520 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:50.520 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:50.520 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:50.521 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:50.521 + rm -f /tmp/spdk-ld-path
00:02:50.521 + source autorun-spdk.conf
00:02:50.521 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:50.521 ++ SPDK_TEST_NVMF=1
00:02:50.521 ++ SPDK_TEST_NVME_CLI=1
00:02:50.521 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:50.521 ++ SPDK_RUN_UBSAN=1
00:02:50.521 ++ NET_TYPE=phy
00:02:50.521 ++ RUN_NIGHTLY=1
00:02:50.521 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:50.521 + [[ -n '' ]]
00:02:50.521 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:50.521 + for M in /var/spdk/build-*-manifest.txt
00:02:50.521 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:50.521 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:50.521 + for M in /var/spdk/build-*-manifest.txt
00:02:50.521 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:50.521 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:50.521 ++ uname
00:02:50.521 + [[ Linux == \L\i\n\u\x ]]
00:02:50.521 + sudo dmesg -T
00:02:50.521 + sudo dmesg --clear
00:02:50.521 + dmesg_pid=1776222
00:02:50.521 + [[ Fedora Linux == FreeBSD ]]
00:02:50.521 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:50.521 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:50.521 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:50.521 + [[ -x /usr/src/fio-static/fio ]]
00:02:50.521 + export FIO_BIN=/usr/src/fio-static/fio
00:02:50.521 + FIO_BIN=/usr/src/fio-static/fio
00:02:50.521 + sudo dmesg -Tw
00:02:50.521 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:50.521 + [[ !
-v VFIO_QEMU_BIN ]] 00:02:50.521 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:50.521 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:50.521 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:50.521 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:50.521 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:50.521 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:50.521 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:50.521 Test configuration: 00:02:50.521 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:50.521 SPDK_TEST_NVMF=1 00:02:50.521 SPDK_TEST_NVME_CLI=1 00:02:50.521 SPDK_TEST_NVMF_NICS=mlx5 00:02:50.521 SPDK_RUN_UBSAN=1 00:02:50.521 NET_TYPE=phy 00:02:50.521 RUN_NIGHTLY=1 13:30:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:50.521 13:30:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:50.521 13:30:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.521 13:30:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.521 13:30:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.521 13:30:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.521 13:30:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.521 13:30:43 -- paths/export.sh@5 -- $ export PATH 00:02:50.521 13:30:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.521 13:30:43 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:50.521 13:30:43 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:50.521 13:30:43 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718105443.XXXXXX 00:02:50.521 13:30:43 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718105443.1ArqBj 00:02:50.521 13:30:43 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:50.521 13:30:43 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:50.521 13:30:43 -- common/autobuild_common.sh@446 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:02:50.521 13:30:43 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:50.521 13:30:43 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:50.521 13:30:43 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:50.521 13:30:43 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:50.521 13:30:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.521 13:30:43 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:02:50.521 13:30:43 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:50.521 13:30:43 -- pm/common@17 -- $ local monitor 00:02:50.521 13:30:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.521 13:30:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.521 13:30:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.521 13:30:43 -- pm/common@21 -- $ date +%s 00:02:50.521 13:30:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.521 13:30:43 -- pm/common@25 -- $ sleep 1 00:02:50.521 13:30:43 -- pm/common@21 -- $ date +%s 00:02:50.521 13:30:43 -- pm/common@21 -- $ date +%s 00:02:50.521 13:30:43 -- pm/common@21 -- $ date +%s 00:02:50.521 13:30:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105443 00:02:50.521 13:30:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105443 00:02:50.521 13:30:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105443 00:02:50.521 13:30:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105443 00:02:50.521 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105443_collect-vmstat.pm.log 00:02:50.521 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105443_collect-cpu-load.pm.log 00:02:50.521 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105443_collect-cpu-temp.pm.log 00:02:50.521 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105443_collect-bmc-pm.bmc.pm.log 00:02:51.463 13:30:44 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:51.463 13:30:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:51.463 13:30:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:51.463 13:30:44 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:51.463 13:30:44 -- spdk/autobuild.sh@16 -- $ date -u 00:02:51.463 Tue Jun 11 11:30:44 AM UTC 2024 00:02:51.463 13:30:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:51.463 v24.09-pre-65-g9ccef4907 00:02:51.463 13:30:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:51.463 13:30:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:51.463 13:30:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:51.463 13:30:44 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:51.463 13:30:44 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:51.463 13:30:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.463 ************************************ 00:02:51.463 START TEST ubsan 00:02:51.463 ************************************ 00:02:51.463 13:30:44 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:02:51.463 using ubsan 00:02:51.463 00:02:51.463 real 0m0.001s 00:02:51.463 user 0m0.000s 00:02:51.463 sys 0m0.000s 00:02:51.463 13:30:44 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:51.463 13:30:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:51.463 ************************************ 00:02:51.463 END TEST ubsan 00:02:51.463 ************************************ 00:02:51.724 13:30:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:51.724 13:30:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:51.724 13:30:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:51.724 13:30:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:51.724 13:30:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:51.724 13:30:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:51.724 13:30:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:51.724 13:30:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:51.724 13:30:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:02:51.724 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:51.724 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:51.984 Using 'verbs' RDMA provider 00:03:07.827 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:20.050 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:20.050 Creating mk/config.mk...done. 00:03:20.050 Creating mk/cc.flags.mk...done. 00:03:20.050 Type 'make' to build. 00:03:20.050 13:31:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:03:20.050 13:31:12 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:03:20.050 13:31:12 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:03:20.050 13:31:12 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.050 ************************************ 00:03:20.050 START TEST make 00:03:20.050 ************************************ 00:03:20.050 13:31:12 make -- common/autotest_common.sh@1124 -- $ make -j144 00:03:20.050 make[1]: Nothing to be done for 'all'. 
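The configure line and "run_test make make -j144" recorded above are enough to repeat this build by hand; a minimal sketch using the exact flags from the log (the checkout path is this job's workspace, and -j144 matches this machine, use $(nproc) elsewhere):

  # Same configure invocation as logged above, followed by the parallel make that
  # "run_test make" wraps. Debug build with UBSan and coverage, unit tests disabled.
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-shared
  make -j144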
00:03:28.194 The Meson build system 00:03:28.194 Version: 1.3.1 00:03:28.194 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:03:28.194 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:03:28.194 Build type: native build 00:03:28.194 Program cat found: YES (/usr/bin/cat) 00:03:28.194 Project name: DPDK 00:03:28.194 Project version: 24.03.0 00:03:28.194 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:28.194 C linker for the host machine: cc ld.bfd 2.39-16 00:03:28.194 Host machine cpu family: x86_64 00:03:28.194 Host machine cpu: x86_64 00:03:28.194 Message: ## Building in Developer Mode ## 00:03:28.194 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:28.194 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:28.194 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:28.194 Program python3 found: YES (/usr/bin/python3) 00:03:28.194 Program cat found: YES (/usr/bin/cat) 00:03:28.194 Compiler for C supports arguments -march=native: YES 00:03:28.194 Checking for size of "void *" : 8 00:03:28.194 Checking for size of "void *" : 8 (cached) 00:03:28.194 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:28.194 Library m found: YES 00:03:28.194 Library numa found: YES 00:03:28.194 Has header "numaif.h" : YES 00:03:28.194 Library fdt found: NO 00:03:28.194 Library execinfo found: NO 00:03:28.194 Has header "execinfo.h" : YES 00:03:28.194 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:28.194 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:28.194 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:28.194 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:28.194 Run-time dependency openssl found: YES 3.0.9 00:03:28.194 Run-time dependency libpcap found: YES 1.10.4 00:03:28.194 Has header "pcap.h" with dependency libpcap: YES 00:03:28.194 Compiler for C supports arguments -Wcast-qual: YES 00:03:28.194 Compiler for C supports arguments -Wdeprecated: YES 00:03:28.194 Compiler for C supports arguments -Wformat: YES 00:03:28.194 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:28.194 Compiler for C supports arguments -Wformat-security: NO 00:03:28.194 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:28.194 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:28.194 Compiler for C supports arguments -Wnested-externs: YES 00:03:28.194 Compiler for C supports arguments -Wold-style-definition: YES 00:03:28.194 Compiler for C supports arguments -Wpointer-arith: YES 00:03:28.194 Compiler for C supports arguments -Wsign-compare: YES 00:03:28.194 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:28.194 Compiler for C supports arguments -Wundef: YES 00:03:28.194 Compiler for C supports arguments -Wwrite-strings: YES 00:03:28.194 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:28.194 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:28.194 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:28.194 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:28.194 Program objdump found: YES (/usr/bin/objdump) 00:03:28.194 Compiler for C supports arguments -mavx512f: YES 00:03:28.194 Checking if "AVX512 checking" compiles: YES 00:03:28.194 Fetching 
value of define "__SSE4_2__" : 1 00:03:28.194 Fetching value of define "__AES__" : 1 00:03:28.194 Fetching value of define "__AVX__" : 1 00:03:28.194 Fetching value of define "__AVX2__" : 1 00:03:28.194 Fetching value of define "__AVX512BW__" : 1 00:03:28.194 Fetching value of define "__AVX512CD__" : 1 00:03:28.194 Fetching value of define "__AVX512DQ__" : 1 00:03:28.194 Fetching value of define "__AVX512F__" : 1 00:03:28.194 Fetching value of define "__AVX512VL__" : 1 00:03:28.194 Fetching value of define "__PCLMUL__" : 1 00:03:28.194 Fetching value of define "__RDRND__" : 1 00:03:28.194 Fetching value of define "__RDSEED__" : 1 00:03:28.194 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:28.194 Fetching value of define "__znver1__" : (undefined) 00:03:28.194 Fetching value of define "__znver2__" : (undefined) 00:03:28.194 Fetching value of define "__znver3__" : (undefined) 00:03:28.194 Fetching value of define "__znver4__" : (undefined) 00:03:28.194 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:28.194 Message: lib/log: Defining dependency "log" 00:03:28.194 Message: lib/kvargs: Defining dependency "kvargs" 00:03:28.194 Message: lib/telemetry: Defining dependency "telemetry" 00:03:28.194 Checking for function "getentropy" : NO 00:03:28.194 Message: lib/eal: Defining dependency "eal" 00:03:28.194 Message: lib/ring: Defining dependency "ring" 00:03:28.194 Message: lib/rcu: Defining dependency "rcu" 00:03:28.194 Message: lib/mempool: Defining dependency "mempool" 00:03:28.194 Message: lib/mbuf: Defining dependency "mbuf" 00:03:28.194 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:28.194 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:28.194 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:28.194 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:28.194 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:28.194 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:28.194 Compiler for C supports arguments -mpclmul: YES 00:03:28.194 Compiler for C supports arguments -maes: YES 00:03:28.194 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:28.194 Compiler for C supports arguments -mavx512bw: YES 00:03:28.194 Compiler for C supports arguments -mavx512dq: YES 00:03:28.194 Compiler for C supports arguments -mavx512vl: YES 00:03:28.194 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:28.194 Compiler for C supports arguments -mavx2: YES 00:03:28.194 Compiler for C supports arguments -mavx: YES 00:03:28.194 Message: lib/net: Defining dependency "net" 00:03:28.194 Message: lib/meter: Defining dependency "meter" 00:03:28.195 Message: lib/ethdev: Defining dependency "ethdev" 00:03:28.195 Message: lib/pci: Defining dependency "pci" 00:03:28.195 Message: lib/cmdline: Defining dependency "cmdline" 00:03:28.195 Message: lib/hash: Defining dependency "hash" 00:03:28.195 Message: lib/timer: Defining dependency "timer" 00:03:28.195 Message: lib/compressdev: Defining dependency "compressdev" 00:03:28.195 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:28.195 Message: lib/dmadev: Defining dependency "dmadev" 00:03:28.195 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:28.195 Message: lib/power: Defining dependency "power" 00:03:28.195 Message: lib/reorder: Defining dependency "reorder" 00:03:28.195 Message: lib/security: Defining dependency "security" 00:03:28.195 Has header "linux/userfaultfd.h" : YES 00:03:28.195 Has header "linux/vduse.h" : YES 00:03:28.195 Message: lib/vhost: Defining 
dependency "vhost" 00:03:28.195 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:28.195 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:28.195 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:28.195 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:28.195 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:28.195 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:28.195 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:28.195 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:28.195 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:28.195 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:28.195 Program doxygen found: YES (/usr/bin/doxygen) 00:03:28.195 Configuring doxy-api-html.conf using configuration 00:03:28.195 Configuring doxy-api-man.conf using configuration 00:03:28.195 Program mandb found: YES (/usr/bin/mandb) 00:03:28.195 Program sphinx-build found: NO 00:03:28.195 Configuring rte_build_config.h using configuration 00:03:28.195 Message: 00:03:28.195 ================= 00:03:28.195 Applications Enabled 00:03:28.195 ================= 00:03:28.195 00:03:28.195 apps: 00:03:28.195 00:03:28.195 00:03:28.195 Message: 00:03:28.195 ================= 00:03:28.195 Libraries Enabled 00:03:28.195 ================= 00:03:28.195 00:03:28.195 libs: 00:03:28.195 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:28.195 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:28.195 cryptodev, dmadev, power, reorder, security, vhost, 00:03:28.195 00:03:28.195 Message: 00:03:28.195 =============== 00:03:28.195 Drivers Enabled 00:03:28.195 =============== 00:03:28.195 00:03:28.195 common: 00:03:28.195 00:03:28.195 bus: 00:03:28.195 pci, vdev, 00:03:28.195 mempool: 00:03:28.195 ring, 00:03:28.195 dma: 00:03:28.195 00:03:28.195 net: 00:03:28.195 00:03:28.195 crypto: 00:03:28.195 00:03:28.195 compress: 00:03:28.195 00:03:28.195 vdpa: 00:03:28.195 00:03:28.195 00:03:28.195 Message: 00:03:28.195 ================= 00:03:28.195 Content Skipped 00:03:28.195 ================= 00:03:28.195 00:03:28.195 apps: 00:03:28.195 dumpcap: explicitly disabled via build config 00:03:28.195 graph: explicitly disabled via build config 00:03:28.195 pdump: explicitly disabled via build config 00:03:28.195 proc-info: explicitly disabled via build config 00:03:28.195 test-acl: explicitly disabled via build config 00:03:28.195 test-bbdev: explicitly disabled via build config 00:03:28.195 test-cmdline: explicitly disabled via build config 00:03:28.195 test-compress-perf: explicitly disabled via build config 00:03:28.195 test-crypto-perf: explicitly disabled via build config 00:03:28.195 test-dma-perf: explicitly disabled via build config 00:03:28.195 test-eventdev: explicitly disabled via build config 00:03:28.195 test-fib: explicitly disabled via build config 00:03:28.195 test-flow-perf: explicitly disabled via build config 00:03:28.195 test-gpudev: explicitly disabled via build config 00:03:28.195 test-mldev: explicitly disabled via build config 00:03:28.195 test-pipeline: explicitly disabled via build config 00:03:28.195 test-pmd: explicitly disabled via build config 00:03:28.195 test-regex: explicitly disabled via build config 00:03:28.195 test-sad: explicitly disabled via build config 00:03:28.195 test-security-perf: explicitly disabled via build config 
00:03:28.195 00:03:28.195 libs: 00:03:28.195 argparse: explicitly disabled via build config 00:03:28.195 metrics: explicitly disabled via build config 00:03:28.195 acl: explicitly disabled via build config 00:03:28.195 bbdev: explicitly disabled via build config 00:03:28.195 bitratestats: explicitly disabled via build config 00:03:28.195 bpf: explicitly disabled via build config 00:03:28.195 cfgfile: explicitly disabled via build config 00:03:28.195 distributor: explicitly disabled via build config 00:03:28.195 efd: explicitly disabled via build config 00:03:28.195 eventdev: explicitly disabled via build config 00:03:28.195 dispatcher: explicitly disabled via build config 00:03:28.195 gpudev: explicitly disabled via build config 00:03:28.195 gro: explicitly disabled via build config 00:03:28.195 gso: explicitly disabled via build config 00:03:28.195 ip_frag: explicitly disabled via build config 00:03:28.195 jobstats: explicitly disabled via build config 00:03:28.195 latencystats: explicitly disabled via build config 00:03:28.195 lpm: explicitly disabled via build config 00:03:28.195 member: explicitly disabled via build config 00:03:28.195 pcapng: explicitly disabled via build config 00:03:28.195 rawdev: explicitly disabled via build config 00:03:28.195 regexdev: explicitly disabled via build config 00:03:28.195 mldev: explicitly disabled via build config 00:03:28.195 rib: explicitly disabled via build config 00:03:28.195 sched: explicitly disabled via build config 00:03:28.195 stack: explicitly disabled via build config 00:03:28.195 ipsec: explicitly disabled via build config 00:03:28.195 pdcp: explicitly disabled via build config 00:03:28.195 fib: explicitly disabled via build config 00:03:28.195 port: explicitly disabled via build config 00:03:28.195 pdump: explicitly disabled via build config 00:03:28.195 table: explicitly disabled via build config 00:03:28.195 pipeline: explicitly disabled via build config 00:03:28.195 graph: explicitly disabled via build config 00:03:28.195 node: explicitly disabled via build config 00:03:28.195 00:03:28.195 drivers: 00:03:28.195 common/cpt: not in enabled drivers build config 00:03:28.195 common/dpaax: not in enabled drivers build config 00:03:28.195 common/iavf: not in enabled drivers build config 00:03:28.195 common/idpf: not in enabled drivers build config 00:03:28.195 common/ionic: not in enabled drivers build config 00:03:28.195 common/mvep: not in enabled drivers build config 00:03:28.195 common/octeontx: not in enabled drivers build config 00:03:28.195 bus/auxiliary: not in enabled drivers build config 00:03:28.195 bus/cdx: not in enabled drivers build config 00:03:28.195 bus/dpaa: not in enabled drivers build config 00:03:28.195 bus/fslmc: not in enabled drivers build config 00:03:28.195 bus/ifpga: not in enabled drivers build config 00:03:28.195 bus/platform: not in enabled drivers build config 00:03:28.195 bus/uacce: not in enabled drivers build config 00:03:28.195 bus/vmbus: not in enabled drivers build config 00:03:28.195 common/cnxk: not in enabled drivers build config 00:03:28.195 common/mlx5: not in enabled drivers build config 00:03:28.195 common/nfp: not in enabled drivers build config 00:03:28.195 common/nitrox: not in enabled drivers build config 00:03:28.195 common/qat: not in enabled drivers build config 00:03:28.195 common/sfc_efx: not in enabled drivers build config 00:03:28.195 mempool/bucket: not in enabled drivers build config 00:03:28.195 mempool/cnxk: not in enabled drivers build config 00:03:28.195 mempool/dpaa: not in 
enabled drivers build config 00:03:28.195 mempool/dpaa2: not in enabled drivers build config 00:03:28.195 mempool/octeontx: not in enabled drivers build config 00:03:28.195 mempool/stack: not in enabled drivers build config 00:03:28.195 dma/cnxk: not in enabled drivers build config 00:03:28.195 dma/dpaa: not in enabled drivers build config 00:03:28.195 dma/dpaa2: not in enabled drivers build config 00:03:28.195 dma/hisilicon: not in enabled drivers build config 00:03:28.195 dma/idxd: not in enabled drivers build config 00:03:28.195 dma/ioat: not in enabled drivers build config 00:03:28.195 dma/skeleton: not in enabled drivers build config 00:03:28.195 net/af_packet: not in enabled drivers build config 00:03:28.195 net/af_xdp: not in enabled drivers build config 00:03:28.195 net/ark: not in enabled drivers build config 00:03:28.195 net/atlantic: not in enabled drivers build config 00:03:28.195 net/avp: not in enabled drivers build config 00:03:28.195 net/axgbe: not in enabled drivers build config 00:03:28.195 net/bnx2x: not in enabled drivers build config 00:03:28.195 net/bnxt: not in enabled drivers build config 00:03:28.195 net/bonding: not in enabled drivers build config 00:03:28.195 net/cnxk: not in enabled drivers build config 00:03:28.195 net/cpfl: not in enabled drivers build config 00:03:28.195 net/cxgbe: not in enabled drivers build config 00:03:28.195 net/dpaa: not in enabled drivers build config 00:03:28.195 net/dpaa2: not in enabled drivers build config 00:03:28.195 net/e1000: not in enabled drivers build config 00:03:28.195 net/ena: not in enabled drivers build config 00:03:28.195 net/enetc: not in enabled drivers build config 00:03:28.195 net/enetfec: not in enabled drivers build config 00:03:28.195 net/enic: not in enabled drivers build config 00:03:28.195 net/failsafe: not in enabled drivers build config 00:03:28.195 net/fm10k: not in enabled drivers build config 00:03:28.195 net/gve: not in enabled drivers build config 00:03:28.195 net/hinic: not in enabled drivers build config 00:03:28.195 net/hns3: not in enabled drivers build config 00:03:28.195 net/i40e: not in enabled drivers build config 00:03:28.195 net/iavf: not in enabled drivers build config 00:03:28.195 net/ice: not in enabled drivers build config 00:03:28.195 net/idpf: not in enabled drivers build config 00:03:28.195 net/igc: not in enabled drivers build config 00:03:28.195 net/ionic: not in enabled drivers build config 00:03:28.195 net/ipn3ke: not in enabled drivers build config 00:03:28.195 net/ixgbe: not in enabled drivers build config 00:03:28.195 net/mana: not in enabled drivers build config 00:03:28.195 net/memif: not in enabled drivers build config 00:03:28.196 net/mlx4: not in enabled drivers build config 00:03:28.196 net/mlx5: not in enabled drivers build config 00:03:28.196 net/mvneta: not in enabled drivers build config 00:03:28.196 net/mvpp2: not in enabled drivers build config 00:03:28.196 net/netvsc: not in enabled drivers build config 00:03:28.196 net/nfb: not in enabled drivers build config 00:03:28.196 net/nfp: not in enabled drivers build config 00:03:28.196 net/ngbe: not in enabled drivers build config 00:03:28.196 net/null: not in enabled drivers build config 00:03:28.196 net/octeontx: not in enabled drivers build config 00:03:28.196 net/octeon_ep: not in enabled drivers build config 00:03:28.196 net/pcap: not in enabled drivers build config 00:03:28.196 net/pfe: not in enabled drivers build config 00:03:28.196 net/qede: not in enabled drivers build config 00:03:28.196 net/ring: not in 
enabled drivers build config 00:03:28.196 net/sfc: not in enabled drivers build config 00:03:28.196 net/softnic: not in enabled drivers build config 00:03:28.196 net/tap: not in enabled drivers build config 00:03:28.196 net/thunderx: not in enabled drivers build config 00:03:28.196 net/txgbe: not in enabled drivers build config 00:03:28.196 net/vdev_netvsc: not in enabled drivers build config 00:03:28.196 net/vhost: not in enabled drivers build config 00:03:28.196 net/virtio: not in enabled drivers build config 00:03:28.196 net/vmxnet3: not in enabled drivers build config 00:03:28.196 raw/*: missing internal dependency, "rawdev" 00:03:28.196 crypto/armv8: not in enabled drivers build config 00:03:28.196 crypto/bcmfs: not in enabled drivers build config 00:03:28.196 crypto/caam_jr: not in enabled drivers build config 00:03:28.196 crypto/ccp: not in enabled drivers build config 00:03:28.196 crypto/cnxk: not in enabled drivers build config 00:03:28.196 crypto/dpaa_sec: not in enabled drivers build config 00:03:28.196 crypto/dpaa2_sec: not in enabled drivers build config 00:03:28.196 crypto/ipsec_mb: not in enabled drivers build config 00:03:28.196 crypto/mlx5: not in enabled drivers build config 00:03:28.196 crypto/mvsam: not in enabled drivers build config 00:03:28.196 crypto/nitrox: not in enabled drivers build config 00:03:28.196 crypto/null: not in enabled drivers build config 00:03:28.196 crypto/octeontx: not in enabled drivers build config 00:03:28.196 crypto/openssl: not in enabled drivers build config 00:03:28.196 crypto/scheduler: not in enabled drivers build config 00:03:28.196 crypto/uadk: not in enabled drivers build config 00:03:28.196 crypto/virtio: not in enabled drivers build config 00:03:28.196 compress/isal: not in enabled drivers build config 00:03:28.196 compress/mlx5: not in enabled drivers build config 00:03:28.196 compress/nitrox: not in enabled drivers build config 00:03:28.196 compress/octeontx: not in enabled drivers build config 00:03:28.196 compress/zlib: not in enabled drivers build config 00:03:28.196 regex/*: missing internal dependency, "regexdev" 00:03:28.196 ml/*: missing internal dependency, "mldev" 00:03:28.196 vdpa/ifc: not in enabled drivers build config 00:03:28.196 vdpa/mlx5: not in enabled drivers build config 00:03:28.196 vdpa/nfp: not in enabled drivers build config 00:03:28.196 vdpa/sfc: not in enabled drivers build config 00:03:28.196 event/*: missing internal dependency, "eventdev" 00:03:28.196 baseband/*: missing internal dependency, "bbdev" 00:03:28.196 gpu/*: missing internal dependency, "gpudev" 00:03:28.196 00:03:28.196 00:03:28.196 Build targets in project: 84 00:03:28.196 00:03:28.196 DPDK 24.03.0 00:03:28.196 00:03:28.196 User defined options 00:03:28.196 buildtype : debug 00:03:28.196 default_library : shared 00:03:28.196 libdir : lib 00:03:28.196 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:03:28.196 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:28.196 c_link_args : 00:03:28.196 cpu_instruction_set: native 00:03:28.196 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:03:28.196 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:03:28.196 enable_docs : false 00:03:28.196 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:28.196 enable_kmods : false 00:03:28.196 tests : false 00:03:28.196 00:03:28.196 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:28.196 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:03:28.196 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:28.196 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:28.196 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:28.196 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:28.196 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:28.196 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:28.196 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:28.196 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:28.196 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:28.196 [10/267] Linking static target lib/librte_kvargs.a 00:03:28.196 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:28.196 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:28.196 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:28.196 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:28.196 [15/267] Linking static target lib/librte_log.a 00:03:28.196 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:28.196 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:28.196 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:28.196 [19/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:28.455 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:28.455 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:28.455 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:28.455 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:28.455 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:28.455 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:28.455 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:28.455 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:28.455 [28/267] Linking static target lib/librte_pci.a 00:03:28.455 [29/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:28.455 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:28.455 [31/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:28.455 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:28.455 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:28.455 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:28.455 [35/267] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:28.455 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:28.455 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:28.455 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:28.714 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.715 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:28.715 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:28.715 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:28.715 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:28.715 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:28.715 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:28.715 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.715 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:28.715 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:28.715 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:28.715 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:28.715 [51/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:28.715 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:28.715 [53/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:28.715 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:28.715 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:28.715 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:28.715 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:28.715 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:28.715 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:28.715 [60/267] Linking static target lib/librte_ring.a 00:03:28.715 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:28.715 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:28.715 [63/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:28.715 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:28.715 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:28.715 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:28.715 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:28.715 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:28.715 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:28.715 [70/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:28.715 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:28.715 [72/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:28.715 [73/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:28.715 [74/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:28.715 [75/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:28.715 [76/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:28.715 [77/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:28.715 [78/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:28.715 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:28.715 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:28.715 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:28.715 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:28.715 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:28.715 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:28.715 [85/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:28.715 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:28.715 [87/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:28.715 [88/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:28.715 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:28.715 [90/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:28.715 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:28.715 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:28.715 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:28.715 [94/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:28.715 [95/267] Linking static target lib/librte_meter.a 00:03:28.715 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:28.715 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:28.715 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:28.715 [99/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:28.715 [100/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:28.715 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:28.715 [102/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:28.715 [103/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:28.715 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:28.715 [105/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:28.715 [106/267] Linking static target lib/librte_timer.a 00:03:28.715 [107/267] Linking static target lib/librte_telemetry.a 00:03:28.715 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:28.715 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:28.715 [110/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:28.715 [111/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:28.715 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:28.715 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:28.715 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:28.715 
[115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:28.975 [116/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:28.975 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:28.975 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:28.975 [119/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:28.975 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:28.975 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:28.975 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:28.975 [123/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:28.975 [124/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:28.975 [125/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:28.975 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:28.975 [127/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:28.975 [128/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:28.975 [129/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:28.975 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:28.975 [131/267] Linking static target lib/librte_cmdline.a 00:03:28.975 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.975 [133/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:28.975 [134/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:28.975 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:28.975 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:28.975 [137/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:28.975 [138/267] Linking static target lib/librte_compressdev.a 00:03:28.975 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:28.975 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:28.975 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:28.975 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:28.975 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:28.975 [144/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:28.975 [145/267] Linking target lib/librte_log.so.24.1 00:03:28.975 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:28.975 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:28.975 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:28.975 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:28.975 [150/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:28.975 [151/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:28.975 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:28.975 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:28.975 [154/267] Linking static target lib/librte_dmadev.a 00:03:28.975 [155/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:28.975 [156/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:28.975 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:28.975 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:28.975 [159/267] Linking static target lib/librte_rcu.a 00:03:28.975 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:28.975 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:28.975 [162/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:28.975 [163/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:28.975 [164/267] Linking static target lib/librte_eal.a 00:03:28.975 [165/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:28.975 [166/267] Linking static target lib/librte_reorder.a 00:03:28.975 [167/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:28.975 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:28.975 [169/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:28.975 [170/267] Linking static target lib/librte_power.a 00:03:28.975 [171/267] Linking static target lib/librte_net.a 00:03:28.975 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:28.975 [173/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:28.975 [174/267] Linking static target lib/librte_mempool.a 00:03:28.975 [175/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:28.975 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:28.975 [177/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:28.975 [178/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:28.975 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:28.975 [180/267] Linking static target lib/librte_security.a 00:03:28.975 [181/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.975 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:28.975 [183/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.975 [184/267] Linking target lib/librte_kvargs.so.24.1 00:03:28.975 [185/267] Linking static target lib/librte_mbuf.a 00:03:28.975 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:28.975 [187/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:28.975 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:28.975 [189/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:29.235 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.235 [191/267] Linking static target drivers/librte_bus_vdev.a 00:03:29.235 [192/267] Linking static target lib/librte_cryptodev.a 00:03:29.235 [193/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:29.235 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:29.235 [195/267] Linking static target lib/librte_hash.a 00:03:29.236 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:29.236 [197/267] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:29.236 [198/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:29.236 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:29.236 [200/267] Linking static target drivers/librte_mempool_ring.a 00:03:29.236 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:29.236 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.236 [203/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.236 [204/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:29.236 [205/267] Linking static target drivers/librte_bus_pci.a 00:03:29.236 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.236 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:29.236 [208/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.236 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.495 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.495 [211/267] Linking target lib/librte_telemetry.so.24.1 00:03:29.495 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.495 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.495 [214/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:29.495 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:29.754 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.754 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.754 [218/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.754 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:29.755 [220/267] Linking static target lib/librte_ethdev.a 00:03:30.015 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.015 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.015 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.015 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.015 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.275 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.847 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:30.847 [228/267] Linking static target lib/librte_vhost.a 00:03:31.418 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.330 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.916 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.487 [232/267] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.487 [233/267] Linking target lib/librte_eal.so.24.1 00:03:40.487 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:40.747 [235/267] Linking target lib/librte_ring.so.24.1 00:03:40.747 [236/267] Linking target lib/librte_meter.so.24.1 00:03:40.747 [237/267] Linking target lib/librte_dmadev.so.24.1 00:03:40.747 [238/267] Linking target lib/librte_pci.so.24.1 00:03:40.747 [239/267] Linking target lib/librte_timer.so.24.1 00:03:40.747 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:40.747 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:40.747 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:40.747 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:40.747 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:40.747 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:40.747 [246/267] Linking target lib/librte_rcu.so.24.1 00:03:40.747 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:40.747 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:41.007 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:41.007 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:41.007 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:41.007 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:41.267 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:41.267 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:03:41.267 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:41.267 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:41.267 [257/267] Linking target lib/librte_net.so.24.1 00:03:41.267 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:41.267 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:41.528 [260/267] Linking target lib/librte_hash.so.24.1 00:03:41.528 [261/267] Linking target lib/librte_security.so.24.1 00:03:41.528 [262/267] Linking target lib/librte_cmdline.so.24.1 00:03:41.528 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:41.528 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:41.528 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:41.528 [266/267] Linking target lib/librte_power.so.24.1 00:03:41.788 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:41.788 INFO: autodetecting backend as ninja 00:03:41.788 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:42.728 CC lib/ut_mock/mock.o 00:03:42.728 CC lib/ut/ut.o 00:03:42.728 CC lib/log/log.o 00:03:42.728 CC lib/log/log_flags.o 00:03:42.728 CC lib/log/log_deprecated.o 00:03:42.988 LIB libspdk_ut_mock.a 00:03:42.988 LIB libspdk_ut.a 00:03:42.988 SO libspdk_ut_mock.so.6.0 00:03:42.988 LIB libspdk_log.a 00:03:42.988 SO libspdk_ut.so.2.0 00:03:42.988 SO libspdk_log.so.7.0 00:03:42.988 SYMLINK libspdk_ut_mock.so 00:03:42.988 SYMLINK libspdk_ut.so 00:03:42.988 SYMLINK libspdk_log.so 00:03:43.249 CC lib/ioat/ioat.o 00:03:43.249 CC 
lib/dma/dma.o 00:03:43.509 CC lib/util/base64.o 00:03:43.509 CXX lib/trace_parser/trace.o 00:03:43.509 CC lib/util/bit_array.o 00:03:43.509 CC lib/util/cpuset.o 00:03:43.509 CC lib/util/crc16.o 00:03:43.509 CC lib/util/crc32.o 00:03:43.509 CC lib/util/crc32c.o 00:03:43.509 CC lib/util/crc32_ieee.o 00:03:43.509 CC lib/util/crc64.o 00:03:43.509 CC lib/util/dif.o 00:03:43.509 CC lib/util/fd.o 00:03:43.509 CC lib/util/file.o 00:03:43.510 CC lib/util/hexlify.o 00:03:43.510 CC lib/util/iov.o 00:03:43.510 CC lib/util/math.o 00:03:43.510 CC lib/util/pipe.o 00:03:43.510 CC lib/util/strerror_tls.o 00:03:43.510 CC lib/util/string.o 00:03:43.510 CC lib/util/uuid.o 00:03:43.510 CC lib/util/fd_group.o 00:03:43.510 CC lib/util/xor.o 00:03:43.510 CC lib/util/zipf.o 00:03:43.510 LIB libspdk_dma.a 00:03:43.510 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.510 CC lib/vfio_user/host/vfio_user.o 00:03:43.510 SO libspdk_dma.so.4.0 00:03:43.798 LIB libspdk_ioat.a 00:03:43.798 SO libspdk_ioat.so.7.0 00:03:43.798 SYMLINK libspdk_dma.so 00:03:43.798 SYMLINK libspdk_ioat.so 00:03:43.798 LIB libspdk_vfio_user.a 00:03:43.798 SO libspdk_vfio_user.so.5.0 00:03:43.798 LIB libspdk_util.a 00:03:44.097 SYMLINK libspdk_vfio_user.so 00:03:44.097 SO libspdk_util.so.9.0 00:03:44.097 SYMLINK libspdk_util.so 00:03:44.097 LIB libspdk_trace_parser.a 00:03:44.359 SO libspdk_trace_parser.so.5.0 00:03:44.359 SYMLINK libspdk_trace_parser.so 00:03:44.359 CC lib/idxd/idxd.o 00:03:44.359 CC lib/conf/conf.o 00:03:44.359 CC lib/vmd/vmd.o 00:03:44.359 CC lib/idxd/idxd_user.o 00:03:44.359 CC lib/vmd/led.o 00:03:44.359 CC lib/idxd/idxd_kernel.o 00:03:44.359 CC lib/rdma/common.o 00:03:44.359 CC lib/rdma/rdma_verbs.o 00:03:44.359 CC lib/env_dpdk/env.o 00:03:44.359 CC lib/json/json_parse.o 00:03:44.359 CC lib/env_dpdk/memory.o 00:03:44.359 CC lib/json/json_util.o 00:03:44.359 CC lib/env_dpdk/pci.o 00:03:44.359 CC lib/env_dpdk/init.o 00:03:44.359 CC lib/json/json_write.o 00:03:44.359 CC lib/env_dpdk/threads.o 00:03:44.359 CC lib/env_dpdk/pci_ioat.o 00:03:44.359 CC lib/env_dpdk/pci_virtio.o 00:03:44.359 CC lib/env_dpdk/pci_vmd.o 00:03:44.359 CC lib/env_dpdk/pci_idxd.o 00:03:44.359 CC lib/env_dpdk/pci_event.o 00:03:44.359 CC lib/env_dpdk/sigbus_handler.o 00:03:44.359 CC lib/env_dpdk/pci_dpdk.o 00:03:44.359 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:44.359 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:44.620 LIB libspdk_conf.a 00:03:44.620 LIB libspdk_rdma.a 00:03:44.620 SO libspdk_conf.so.6.0 00:03:44.880 LIB libspdk_json.a 00:03:44.880 SO libspdk_rdma.so.6.0 00:03:44.880 SO libspdk_json.so.6.0 00:03:44.880 SYMLINK libspdk_conf.so 00:03:44.880 SYMLINK libspdk_rdma.so 00:03:44.880 SYMLINK libspdk_json.so 00:03:44.880 LIB libspdk_idxd.a 00:03:44.880 SO libspdk_idxd.so.12.0 00:03:45.141 LIB libspdk_vmd.a 00:03:45.141 SYMLINK libspdk_idxd.so 00:03:45.141 SO libspdk_vmd.so.6.0 00:03:45.141 SYMLINK libspdk_vmd.so 00:03:45.141 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.141 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.141 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.141 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:45.401 LIB libspdk_jsonrpc.a 00:03:45.401 SO libspdk_jsonrpc.so.6.0 00:03:45.663 SYMLINK libspdk_jsonrpc.so 00:03:45.663 LIB libspdk_env_dpdk.a 00:03:45.663 SO libspdk_env_dpdk.so.14.1 00:03:45.923 SYMLINK libspdk_env_dpdk.so 00:03:45.923 CC lib/rpc/rpc.o 00:03:46.184 LIB libspdk_rpc.a 00:03:46.184 SO libspdk_rpc.so.6.0 00:03:46.184 SYMLINK libspdk_rpc.so 00:03:46.754 CC lib/keyring/keyring.o 00:03:46.754 CC lib/keyring/keyring_rpc.o 00:03:46.754 CC lib/trace/trace.o 
00:03:46.754 CC lib/trace/trace_flags.o 00:03:46.754 CC lib/trace/trace_rpc.o 00:03:46.754 CC lib/notify/notify.o 00:03:46.754 CC lib/notify/notify_rpc.o 00:03:46.754 LIB libspdk_notify.a 00:03:46.754 LIB libspdk_keyring.a 00:03:46.754 SO libspdk_notify.so.6.0 00:03:46.754 LIB libspdk_trace.a 00:03:46.754 SO libspdk_keyring.so.1.0 00:03:46.754 SO libspdk_trace.so.10.0 00:03:46.754 SYMLINK libspdk_notify.so 00:03:47.014 SYMLINK libspdk_keyring.so 00:03:47.014 SYMLINK libspdk_trace.so 00:03:47.274 CC lib/thread/iobuf.o 00:03:47.274 CC lib/thread/thread.o 00:03:47.274 CC lib/sock/sock.o 00:03:47.274 CC lib/sock/sock_rpc.o 00:03:47.535 LIB libspdk_sock.a 00:03:47.796 SO libspdk_sock.so.9.0 00:03:47.796 SYMLINK libspdk_sock.so 00:03:48.057 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.057 CC lib/nvme/nvme_ctrlr.o 00:03:48.057 CC lib/nvme/nvme_fabric.o 00:03:48.057 CC lib/nvme/nvme_ns_cmd.o 00:03:48.057 CC lib/nvme/nvme_pcie.o 00:03:48.057 CC lib/nvme/nvme_ns.o 00:03:48.057 CC lib/nvme/nvme_pcie_common.o 00:03:48.057 CC lib/nvme/nvme_qpair.o 00:03:48.057 CC lib/nvme/nvme.o 00:03:48.057 CC lib/nvme/nvme_quirks.o 00:03:48.057 CC lib/nvme/nvme_transport.o 00:03:48.057 CC lib/nvme/nvme_discovery.o 00:03:48.057 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:48.057 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:48.057 CC lib/nvme/nvme_io_msg.o 00:03:48.057 CC lib/nvme/nvme_tcp.o 00:03:48.057 CC lib/nvme/nvme_opal.o 00:03:48.057 CC lib/nvme/nvme_poll_group.o 00:03:48.057 CC lib/nvme/nvme_zns.o 00:03:48.057 CC lib/nvme/nvme_stubs.o 00:03:48.057 CC lib/nvme/nvme_auth.o 00:03:48.057 CC lib/nvme/nvme_cuse.o 00:03:48.057 CC lib/nvme/nvme_rdma.o 00:03:48.629 LIB libspdk_thread.a 00:03:48.629 SO libspdk_thread.so.10.0 00:03:48.629 SYMLINK libspdk_thread.so 00:03:49.201 CC lib/blob/blobstore.o 00:03:49.201 CC lib/blob/request.o 00:03:49.201 CC lib/blob/zeroes.o 00:03:49.201 CC lib/blob/blob_bs_dev.o 00:03:49.201 CC lib/accel/accel.o 00:03:49.201 CC lib/accel/accel_rpc.o 00:03:49.201 CC lib/accel/accel_sw.o 00:03:49.201 CC lib/virtio/virtio.o 00:03:49.201 CC lib/init/json_config.o 00:03:49.201 CC lib/virtio/virtio_vhost_user.o 00:03:49.201 CC lib/init/subsystem.o 00:03:49.201 CC lib/virtio/virtio_vfio_user.o 00:03:49.201 CC lib/init/subsystem_rpc.o 00:03:49.201 CC lib/virtio/virtio_pci.o 00:03:49.201 CC lib/init/rpc.o 00:03:49.201 LIB libspdk_init.a 00:03:49.201 SO libspdk_init.so.5.0 00:03:49.460 LIB libspdk_virtio.a 00:03:49.461 SYMLINK libspdk_init.so 00:03:49.461 SO libspdk_virtio.so.7.0 00:03:49.461 SYMLINK libspdk_virtio.so 00:03:49.721 CC lib/event/app.o 00:03:49.721 CC lib/event/reactor.o 00:03:49.721 CC lib/event/log_rpc.o 00:03:49.721 CC lib/event/app_rpc.o 00:03:49.721 CC lib/event/scheduler_static.o 00:03:49.981 LIB libspdk_accel.a 00:03:49.981 LIB libspdk_nvme.a 00:03:49.981 SO libspdk_accel.so.15.0 00:03:49.981 SO libspdk_nvme.so.13.0 00:03:49.981 SYMLINK libspdk_accel.so 00:03:49.981 LIB libspdk_event.a 00:03:49.981 SO libspdk_event.so.13.1 00:03:50.242 SYMLINK libspdk_event.so 00:03:50.242 SYMLINK libspdk_nvme.so 00:03:50.242 CC lib/bdev/bdev.o 00:03:50.242 CC lib/bdev/bdev_rpc.o 00:03:50.242 CC lib/bdev/bdev_zone.o 00:03:50.242 CC lib/bdev/part.o 00:03:50.242 CC lib/bdev/scsi_nvme.o 00:03:51.628 LIB libspdk_blob.a 00:03:51.628 SO libspdk_blob.so.11.0 00:03:51.628 SYMLINK libspdk_blob.so 00:03:51.890 CC lib/lvol/lvol.o 00:03:51.890 CC lib/blobfs/blobfs.o 00:03:51.890 CC lib/blobfs/tree.o 00:03:52.461 LIB libspdk_bdev.a 00:03:52.461 SO libspdk_bdev.so.15.0 00:03:52.461 LIB libspdk_blobfs.a 00:03:52.721 SO 
libspdk_blobfs.so.10.0 00:03:52.721 SYMLINK libspdk_bdev.so 00:03:52.721 LIB libspdk_lvol.a 00:03:52.721 SO libspdk_lvol.so.10.0 00:03:52.721 SYMLINK libspdk_blobfs.so 00:03:52.721 SYMLINK libspdk_lvol.so 00:03:52.981 CC lib/ftl/ftl_init.o 00:03:52.981 CC lib/ftl/ftl_core.o 00:03:52.981 CC lib/ftl/ftl_layout.o 00:03:52.981 CC lib/ftl/ftl_debug.o 00:03:52.981 CC lib/ublk/ublk.o 00:03:52.981 CC lib/ftl/ftl_io.o 00:03:52.981 CC lib/ublk/ublk_rpc.o 00:03:52.981 CC lib/ftl/ftl_sb.o 00:03:52.981 CC lib/ftl/ftl_l2p_flat.o 00:03:52.981 CC lib/ftl/ftl_l2p.o 00:03:52.981 CC lib/scsi/dev.o 00:03:52.981 CC lib/ftl/ftl_nv_cache.o 00:03:52.981 CC lib/scsi/lun.o 00:03:52.981 CC lib/nbd/nbd.o 00:03:52.981 CC lib/ftl/ftl_band.o 00:03:52.981 CC lib/nvmf/ctrlr.o 00:03:52.981 CC lib/scsi/port.o 00:03:52.981 CC lib/ftl/ftl_band_ops.o 00:03:52.981 CC lib/nbd/nbd_rpc.o 00:03:52.981 CC lib/scsi/scsi.o 00:03:52.981 CC lib/nvmf/ctrlr_discovery.o 00:03:52.981 CC lib/scsi/scsi_bdev.o 00:03:52.981 CC lib/ftl/ftl_writer.o 00:03:52.981 CC lib/nvmf/ctrlr_bdev.o 00:03:52.981 CC lib/scsi/scsi_pr.o 00:03:52.981 CC lib/ftl/ftl_rq.o 00:03:52.981 CC lib/nvmf/subsystem.o 00:03:52.981 CC lib/scsi/scsi_rpc.o 00:03:52.981 CC lib/ftl/ftl_reloc.o 00:03:52.981 CC lib/nvmf/nvmf.o 00:03:52.981 CC lib/nvmf/nvmf_rpc.o 00:03:52.981 CC lib/scsi/task.o 00:03:52.981 CC lib/nvmf/transport.o 00:03:52.981 CC lib/ftl/ftl_l2p_cache.o 00:03:52.981 CC lib/ftl/mngt/ftl_mngt.o 00:03:52.981 CC lib/nvmf/tcp.o 00:03:52.981 CC lib/ftl/ftl_p2l.o 00:03:52.981 CC lib/nvmf/stubs.o 00:03:52.981 CC lib/nvmf/mdns_server.o 00:03:52.981 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:52.981 CC lib/nvmf/rdma.o 00:03:52.981 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:52.981 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:52.981 CC lib/nvmf/auth.o 00:03:52.981 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:52.981 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.982 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.982 CC lib/ftl/utils/ftl_conf.o 00:03:52.982 CC lib/ftl/utils/ftl_md.o 00:03:52.982 CC lib/ftl/utils/ftl_mempool.o 00:03:52.982 CC lib/ftl/utils/ftl_bitmap.o 00:03:52.982 CC lib/ftl/utils/ftl_property.o 00:03:52.982 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:52.982 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:52.982 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:52.982 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:52.982 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:52.982 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:52.982 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:52.982 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:52.982 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:52.982 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:52.982 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:52.982 CC lib/ftl/base/ftl_base_dev.o 00:03:52.982 CC lib/ftl/base/ftl_base_bdev.o 00:03:52.982 CC lib/ftl/ftl_trace.o 00:03:53.547 LIB libspdk_nbd.a 00:03:53.547 SO libspdk_nbd.so.7.0 00:03:53.547 LIB libspdk_scsi.a 00:03:53.547 SYMLINK libspdk_nbd.so 00:03:53.547 SO libspdk_scsi.so.9.0 00:03:53.547 LIB libspdk_ublk.a 00:03:53.806 SO libspdk_ublk.so.3.0 00:03:53.806 SYMLINK libspdk_scsi.so 00:03:53.806 SYMLINK libspdk_ublk.so 00:03:54.066 LIB libspdk_ftl.a 00:03:54.066 CC lib/vhost/vhost.o 00:03:54.066 CC lib/vhost/vhost_rpc.o 00:03:54.066 CC lib/vhost/vhost_scsi.o 00:03:54.066 CC 
lib/vhost/vhost_blk.o 00:03:54.066 CC lib/vhost/rte_vhost_user.o 00:03:54.066 CC lib/iscsi/conn.o 00:03:54.066 CC lib/iscsi/init_grp.o 00:03:54.066 CC lib/iscsi/iscsi.o 00:03:54.066 CC lib/iscsi/md5.o 00:03:54.066 CC lib/iscsi/param.o 00:03:54.066 CC lib/iscsi/iscsi_subsystem.o 00:03:54.066 CC lib/iscsi/portal_grp.o 00:03:54.066 CC lib/iscsi/tgt_node.o 00:03:54.066 CC lib/iscsi/iscsi_rpc.o 00:03:54.066 CC lib/iscsi/task.o 00:03:54.066 SO libspdk_ftl.so.9.0 00:03:54.326 SYMLINK libspdk_ftl.so 00:03:54.326 LIB libspdk_nvmf.a 00:03:54.588 SO libspdk_nvmf.so.18.1 00:03:54.588 SYMLINK libspdk_nvmf.so 00:03:54.850 LIB libspdk_vhost.a 00:03:55.110 SO libspdk_vhost.so.8.0 00:03:55.110 SYMLINK libspdk_vhost.so 00:03:55.110 LIB libspdk_iscsi.a 00:03:55.371 SO libspdk_iscsi.so.8.0 00:03:55.371 SYMLINK libspdk_iscsi.so 00:03:55.943 CC module/env_dpdk/env_dpdk_rpc.o 00:03:56.203 CC module/accel/iaa/accel_iaa_rpc.o 00:03:56.203 CC module/accel/iaa/accel_iaa.o 00:03:56.203 CC module/blob/bdev/blob_bdev.o 00:03:56.203 CC module/sock/posix/posix.o 00:03:56.203 LIB libspdk_env_dpdk_rpc.a 00:03:56.203 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:56.203 CC module/keyring/linux/keyring.o 00:03:56.203 CC module/accel/ioat/accel_ioat.o 00:03:56.203 CC module/keyring/linux/keyring_rpc.o 00:03:56.203 CC module/accel/dsa/accel_dsa.o 00:03:56.203 CC module/keyring/file/keyring.o 00:03:56.203 CC module/accel/ioat/accel_ioat_rpc.o 00:03:56.203 CC module/keyring/file/keyring_rpc.o 00:03:56.203 CC module/accel/dsa/accel_dsa_rpc.o 00:03:56.203 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:56.203 CC module/accel/error/accel_error.o 00:03:56.203 CC module/accel/error/accel_error_rpc.o 00:03:56.203 CC module/scheduler/gscheduler/gscheduler.o 00:03:56.203 SO libspdk_env_dpdk_rpc.so.6.0 00:03:56.203 SYMLINK libspdk_env_dpdk_rpc.so 00:03:56.203 LIB libspdk_keyring_file.a 00:03:56.203 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.203 LIB libspdk_keyring_linux.a 00:03:56.203 LIB libspdk_scheduler_dynamic.a 00:03:56.203 LIB libspdk_scheduler_gscheduler.a 00:03:56.203 SO libspdk_keyring_file.so.1.0 00:03:56.203 SO libspdk_keyring_linux.so.1.0 00:03:56.203 LIB libspdk_accel_iaa.a 00:03:56.203 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:56.203 LIB libspdk_accel_ioat.a 00:03:56.464 LIB libspdk_accel_error.a 00:03:56.464 SO libspdk_scheduler_gscheduler.so.4.0 00:03:56.464 SO libspdk_scheduler_dynamic.so.4.0 00:03:56.464 SO libspdk_accel_iaa.so.3.0 00:03:56.464 SO libspdk_accel_ioat.so.6.0 00:03:56.464 SO libspdk_accel_error.so.2.0 00:03:56.464 SYMLINK libspdk_keyring_file.so 00:03:56.464 LIB libspdk_blob_bdev.a 00:03:56.464 SYMLINK libspdk_keyring_linux.so 00:03:56.464 LIB libspdk_accel_dsa.a 00:03:56.464 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.464 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.464 SYMLINK libspdk_scheduler_dynamic.so 00:03:56.464 SO libspdk_accel_dsa.so.5.0 00:03:56.464 SYMLINK libspdk_accel_ioat.so 00:03:56.464 SO libspdk_blob_bdev.so.11.0 00:03:56.464 SYMLINK libspdk_accel_iaa.so 00:03:56.464 SYMLINK libspdk_accel_error.so 00:03:56.464 SYMLINK libspdk_accel_dsa.so 00:03:56.464 SYMLINK libspdk_blob_bdev.so 00:03:56.725 LIB libspdk_sock_posix.a 00:03:56.725 SO libspdk_sock_posix.so.6.0 00:03:56.985 SYMLINK libspdk_sock_posix.so 00:03:56.985 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.985 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.985 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.985 CC module/bdev/lvol/vbdev_lvol.o 00:03:56.985 CC module/bdev/lvol/vbdev_lvol_rpc.o 
00:03:56.985 CC module/bdev/raid/bdev_raid.o 00:03:56.985 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.985 CC module/bdev/error/vbdev_error.o 00:03:56.985 CC module/bdev/split/vbdev_split.o 00:03:56.985 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.985 CC module/bdev/raid/raid0.o 00:03:56.985 CC module/bdev/error/vbdev_error_rpc.o 00:03:56.985 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.985 CC module/bdev/raid/raid1.o 00:03:56.985 CC module/bdev/raid/concat.o 00:03:56.985 CC module/bdev/null/bdev_null.o 00:03:56.985 CC module/bdev/malloc/bdev_malloc.o 00:03:56.985 CC module/blobfs/bdev/blobfs_bdev.o 00:03:56.985 CC module/bdev/gpt/gpt.o 00:03:56.985 CC module/bdev/null/bdev_null_rpc.o 00:03:56.985 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:56.985 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.985 CC module/bdev/aio/bdev_aio.o 00:03:56.985 CC module/bdev/nvme/bdev_nvme.o 00:03:56.985 CC module/bdev/gpt/vbdev_gpt.o 00:03:56.985 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:56.985 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.985 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.985 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:56.985 CC module/bdev/nvme/nvme_rpc.o 00:03:56.985 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.985 CC module/bdev/nvme/vbdev_opal.o 00:03:56.985 CC module/bdev/ftl/bdev_ftl.o 00:03:56.985 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:56.985 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:56.985 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:56.985 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.985 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.985 CC module/bdev/delay/vbdev_delay.o 00:03:56.985 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:56.985 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.985 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:57.245 LIB libspdk_blobfs_bdev.a 00:03:57.245 SO libspdk_blobfs_bdev.so.6.0 00:03:57.245 LIB libspdk_bdev_split.a 00:03:57.245 LIB libspdk_bdev_error.a 00:03:57.245 LIB libspdk_bdev_null.a 00:03:57.245 LIB libspdk_bdev_gpt.a 00:03:57.245 LIB libspdk_bdev_passthru.a 00:03:57.245 SO libspdk_bdev_error.so.6.0 00:03:57.245 SO libspdk_bdev_split.so.6.0 00:03:57.245 SYMLINK libspdk_blobfs_bdev.so 00:03:57.245 LIB libspdk_bdev_ftl.a 00:03:57.506 SO libspdk_bdev_passthru.so.6.0 00:03:57.506 SO libspdk_bdev_null.so.6.0 00:03:57.506 LIB libspdk_bdev_aio.a 00:03:57.506 SO libspdk_bdev_gpt.so.6.0 00:03:57.506 LIB libspdk_bdev_zone_block.a 00:03:57.506 LIB libspdk_bdev_malloc.a 00:03:57.506 SO libspdk_bdev_aio.so.6.0 00:03:57.506 SO libspdk_bdev_zone_block.so.6.0 00:03:57.506 SO libspdk_bdev_ftl.so.6.0 00:03:57.506 SYMLINK libspdk_bdev_error.so 00:03:57.506 LIB libspdk_bdev_delay.a 00:03:57.506 SYMLINK libspdk_bdev_split.so 00:03:57.506 SYMLINK libspdk_bdev_gpt.so 00:03:57.506 LIB libspdk_bdev_iscsi.a 00:03:57.506 SYMLINK libspdk_bdev_null.so 00:03:57.506 SO libspdk_bdev_malloc.so.6.0 00:03:57.506 SYMLINK libspdk_bdev_passthru.so 00:03:57.506 SO libspdk_bdev_delay.so.6.0 00:03:57.506 SYMLINK libspdk_bdev_aio.so 00:03:57.506 SO libspdk_bdev_iscsi.so.6.0 00:03:57.506 SYMLINK libspdk_bdev_zone_block.so 00:03:57.506 SYMLINK libspdk_bdev_ftl.so 00:03:57.506 LIB libspdk_bdev_lvol.a 00:03:57.506 SYMLINK libspdk_bdev_malloc.so 00:03:57.506 LIB libspdk_bdev_virtio.a 00:03:57.506 SYMLINK libspdk_bdev_iscsi.so 00:03:57.506 SO libspdk_bdev_lvol.so.6.0 00:03:57.506 SYMLINK libspdk_bdev_delay.so 00:03:57.506 SO libspdk_bdev_virtio.so.6.0 00:03:57.506 SYMLINK libspdk_bdev_lvol.so 00:03:57.767 SYMLINK libspdk_bdev_virtio.so 00:03:57.767 LIB 
libspdk_bdev_raid.a 00:03:58.028 SO libspdk_bdev_raid.so.6.0 00:03:58.028 SYMLINK libspdk_bdev_raid.so 00:03:58.970 LIB libspdk_bdev_nvme.a 00:03:58.970 SO libspdk_bdev_nvme.so.7.0 00:03:58.970 SYMLINK libspdk_bdev_nvme.so 00:03:59.911 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.911 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.911 CC module/event/subsystems/sock/sock.o 00:03:59.911 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.911 CC module/event/subsystems/keyring/keyring.o 00:03:59.911 CC module/event/subsystems/vmd/vmd.o 00:03:59.911 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.911 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.911 LIB libspdk_event_sock.a 00:03:59.911 LIB libspdk_event_vmd.a 00:03:59.911 LIB libspdk_event_keyring.a 00:03:59.911 LIB libspdk_event_scheduler.a 00:03:59.911 LIB libspdk_event_vhost_blk.a 00:03:59.911 LIB libspdk_event_iobuf.a 00:03:59.911 SO libspdk_event_sock.so.5.0 00:03:59.911 SO libspdk_event_iobuf.so.3.0 00:03:59.911 SO libspdk_event_keyring.so.1.0 00:03:59.911 SO libspdk_event_vhost_blk.so.3.0 00:03:59.911 SO libspdk_event_vmd.so.6.0 00:03:59.911 SO libspdk_event_scheduler.so.4.0 00:04:00.173 SYMLINK libspdk_event_sock.so 00:04:00.173 SYMLINK libspdk_event_vhost_blk.so 00:04:00.173 SYMLINK libspdk_event_keyring.so 00:04:00.173 SYMLINK libspdk_event_iobuf.so 00:04:00.173 SYMLINK libspdk_event_scheduler.so 00:04:00.173 SYMLINK libspdk_event_vmd.so 00:04:00.433 CC module/event/subsystems/accel/accel.o 00:04:00.433 LIB libspdk_event_accel.a 00:04:00.693 SO libspdk_event_accel.so.6.0 00:04:00.693 SYMLINK libspdk_event_accel.so 00:04:00.954 CC module/event/subsystems/bdev/bdev.o 00:04:01.215 LIB libspdk_event_bdev.a 00:04:01.215 SO libspdk_event_bdev.so.6.0 00:04:01.215 SYMLINK libspdk_event_bdev.so 00:04:01.478 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.478 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.740 CC module/event/subsystems/scsi/scsi.o 00:04:01.740 CC module/event/subsystems/ublk/ublk.o 00:04:01.740 CC module/event/subsystems/nbd/nbd.o 00:04:01.740 LIB libspdk_event_scsi.a 00:04:01.740 LIB libspdk_event_nbd.a 00:04:01.740 LIB libspdk_event_ublk.a 00:04:01.740 SO libspdk_event_scsi.so.6.0 00:04:01.740 SO libspdk_event_ublk.so.3.0 00:04:01.740 SO libspdk_event_nbd.so.6.0 00:04:01.740 LIB libspdk_event_nvmf.a 00:04:02.001 SYMLINK libspdk_event_ublk.so 00:04:02.001 SYMLINK libspdk_event_scsi.so 00:04:02.001 SYMLINK libspdk_event_nbd.so 00:04:02.001 SO libspdk_event_nvmf.so.6.0 00:04:02.001 SYMLINK libspdk_event_nvmf.so 00:04:02.263 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.263 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.263 LIB libspdk_event_vhost_scsi.a 00:04:02.523 LIB libspdk_event_iscsi.a 00:04:02.523 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.523 SO libspdk_event_iscsi.so.6.0 00:04:02.523 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.523 SYMLINK libspdk_event_iscsi.so 00:04:02.784 SO libspdk.so.6.0 00:04:02.784 SYMLINK libspdk.so 00:04:03.045 CC app/trace_record/trace_record.o 00:04:03.045 CXX app/trace/trace.o 00:04:03.045 CC app/spdk_top/spdk_top.o 00:04:03.045 CC test/rpc_client/rpc_client_test.o 00:04:03.045 CC app/spdk_lspci/spdk_lspci.o 00:04:03.045 CC app/spdk_nvme_discover/discovery_aer.o 00:04:03.045 CC app/spdk_nvme_identify/identify.o 00:04:03.045 CC app/spdk_nvme_perf/perf.o 00:04:03.045 CC app/nvmf_tgt/nvmf_main.o 00:04:03.045 TEST_HEADER include/spdk/accel_module.h 00:04:03.045 TEST_HEADER include/spdk/assert.h 00:04:03.045 TEST_HEADER 
include/spdk/accel.h 00:04:03.045 TEST_HEADER include/spdk/barrier.h 00:04:03.045 TEST_HEADER include/spdk/base64.h 00:04:03.045 TEST_HEADER include/spdk/bdev.h 00:04:03.045 TEST_HEADER include/spdk/bdev_module.h 00:04:03.045 TEST_HEADER include/spdk/bit_array.h 00:04:03.045 TEST_HEADER include/spdk/bdev_zone.h 00:04:03.045 TEST_HEADER include/spdk/bit_pool.h 00:04:03.045 TEST_HEADER include/spdk/blob_bdev.h 00:04:03.045 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:03.045 TEST_HEADER include/spdk/blobfs.h 00:04:03.045 TEST_HEADER include/spdk/blob.h 00:04:03.045 CC app/spdk_dd/spdk_dd.o 00:04:03.045 TEST_HEADER include/spdk/conf.h 00:04:03.309 CC app/vhost/vhost.o 00:04:03.309 TEST_HEADER include/spdk/config.h 00:04:03.309 TEST_HEADER include/spdk/cpuset.h 00:04:03.309 TEST_HEADER include/spdk/crc16.h 00:04:03.309 CC app/spdk_tgt/spdk_tgt.o 00:04:03.309 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:03.309 TEST_HEADER include/spdk/crc32.h 00:04:03.309 TEST_HEADER include/spdk/crc64.h 00:04:03.309 TEST_HEADER include/spdk/dif.h 00:04:03.309 TEST_HEADER include/spdk/dma.h 00:04:03.309 TEST_HEADER include/spdk/endian.h 00:04:03.309 TEST_HEADER include/spdk/env_dpdk.h 00:04:03.309 TEST_HEADER include/spdk/env.h 00:04:03.309 TEST_HEADER include/spdk/event.h 00:04:03.309 TEST_HEADER include/spdk/fd_group.h 00:04:03.309 TEST_HEADER include/spdk/fd.h 00:04:03.309 TEST_HEADER include/spdk/file.h 00:04:03.309 TEST_HEADER include/spdk/ftl.h 00:04:03.309 CC app/iscsi_tgt/iscsi_tgt.o 00:04:03.309 TEST_HEADER include/spdk/gpt_spec.h 00:04:03.309 TEST_HEADER include/spdk/histogram_data.h 00:04:03.309 TEST_HEADER include/spdk/idxd.h 00:04:03.309 TEST_HEADER include/spdk/hexlify.h 00:04:03.309 TEST_HEADER include/spdk/init.h 00:04:03.309 TEST_HEADER include/spdk/idxd_spec.h 00:04:03.309 TEST_HEADER include/spdk/ioat.h 00:04:03.309 TEST_HEADER include/spdk/json.h 00:04:03.309 TEST_HEADER include/spdk/iscsi_spec.h 00:04:03.309 TEST_HEADER include/spdk/ioat_spec.h 00:04:03.309 TEST_HEADER include/spdk/jsonrpc.h 00:04:03.309 TEST_HEADER include/spdk/keyring.h 00:04:03.309 TEST_HEADER include/spdk/keyring_module.h 00:04:03.309 TEST_HEADER include/spdk/likely.h 00:04:03.309 TEST_HEADER include/spdk/lvol.h 00:04:03.309 TEST_HEADER include/spdk/memory.h 00:04:03.309 TEST_HEADER include/spdk/log.h 00:04:03.309 TEST_HEADER include/spdk/mmio.h 00:04:03.309 TEST_HEADER include/spdk/nbd.h 00:04:03.309 TEST_HEADER include/spdk/notify.h 00:04:03.309 TEST_HEADER include/spdk/nvme.h 00:04:03.309 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:03.309 TEST_HEADER include/spdk/nvme_intel.h 00:04:03.309 TEST_HEADER include/spdk/nvme_spec.h 00:04:03.309 TEST_HEADER include/spdk/nvme_zns.h 00:04:03.309 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:03.309 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:03.309 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:03.309 TEST_HEADER include/spdk/nvmf.h 00:04:03.309 TEST_HEADER include/spdk/nvmf_spec.h 00:04:03.309 TEST_HEADER include/spdk/nvmf_transport.h 00:04:03.309 TEST_HEADER include/spdk/opal.h 00:04:03.309 TEST_HEADER include/spdk/opal_spec.h 00:04:03.309 TEST_HEADER include/spdk/pipe.h 00:04:03.309 TEST_HEADER include/spdk/pci_ids.h 00:04:03.309 TEST_HEADER include/spdk/queue.h 00:04:03.309 TEST_HEADER include/spdk/rpc.h 00:04:03.309 TEST_HEADER include/spdk/scheduler.h 00:04:03.309 TEST_HEADER include/spdk/reduce.h 00:04:03.309 TEST_HEADER include/spdk/scsi.h 00:04:03.309 TEST_HEADER include/spdk/scsi_spec.h 00:04:03.309 TEST_HEADER include/spdk/sock.h 00:04:03.309 TEST_HEADER 
include/spdk/stdinc.h 00:04:03.309 TEST_HEADER include/spdk/string.h 00:04:03.309 TEST_HEADER include/spdk/thread.h 00:04:03.309 TEST_HEADER include/spdk/trace_parser.h 00:04:03.309 TEST_HEADER include/spdk/trace.h 00:04:03.309 TEST_HEADER include/spdk/tree.h 00:04:03.309 TEST_HEADER include/spdk/ublk.h 00:04:03.309 TEST_HEADER include/spdk/util.h 00:04:03.309 TEST_HEADER include/spdk/uuid.h 00:04:03.309 TEST_HEADER include/spdk/version.h 00:04:03.309 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.309 TEST_HEADER include/spdk/vhost.h 00:04:03.309 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.309 TEST_HEADER include/spdk/vmd.h 00:04:03.309 TEST_HEADER include/spdk/zipf.h 00:04:03.309 TEST_HEADER include/spdk/xor.h 00:04:03.309 CXX test/cpp_headers/accel.o 00:04:03.309 CXX test/cpp_headers/accel_module.o 00:04:03.309 CXX test/cpp_headers/assert.o 00:04:03.309 CXX test/cpp_headers/barrier.o 00:04:03.309 CXX test/cpp_headers/base64.o 00:04:03.309 CXX test/cpp_headers/bdev.o 00:04:03.309 CXX test/cpp_headers/bdev_module.o 00:04:03.309 CXX test/cpp_headers/bdev_zone.o 00:04:03.309 CXX test/cpp_headers/bit_array.o 00:04:03.309 CXX test/cpp_headers/bit_pool.o 00:04:03.309 CXX test/cpp_headers/blob_bdev.o 00:04:03.309 CXX test/cpp_headers/blobfs.o 00:04:03.309 CXX test/cpp_headers/blobfs_bdev.o 00:04:03.309 CXX test/cpp_headers/conf.o 00:04:03.309 CXX test/cpp_headers/blob.o 00:04:03.309 CXX test/cpp_headers/config.o 00:04:03.309 CXX test/cpp_headers/crc32.o 00:04:03.309 CXX test/cpp_headers/cpuset.o 00:04:03.309 CXX test/cpp_headers/crc64.o 00:04:03.309 CXX test/cpp_headers/crc16.o 00:04:03.309 CXX test/cpp_headers/dif.o 00:04:03.309 CXX test/cpp_headers/endian.o 00:04:03.309 CXX test/cpp_headers/dma.o 00:04:03.309 CXX test/cpp_headers/env_dpdk.o 00:04:03.309 CXX test/cpp_headers/env.o 00:04:03.309 CXX test/cpp_headers/event.o 00:04:03.309 CXX test/cpp_headers/fd_group.o 00:04:03.309 CC test/event/reactor/reactor.o 00:04:03.309 CXX test/cpp_headers/fd.o 00:04:03.309 CXX test/cpp_headers/file.o 00:04:03.309 CXX test/cpp_headers/ftl.o 00:04:03.309 CXX test/cpp_headers/gpt_spec.o 00:04:03.309 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:03.309 CXX test/cpp_headers/hexlify.o 00:04:03.309 CXX test/cpp_headers/histogram_data.o 00:04:03.309 CC examples/vmd/led/led.o 00:04:03.309 CXX test/cpp_headers/idxd_spec.o 00:04:03.309 CC examples/util/zipf/zipf.o 00:04:03.309 CXX test/cpp_headers/idxd.o 00:04:03.309 CXX test/cpp_headers/init.o 00:04:03.310 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.310 CC examples/ioat/verify/verify.o 00:04:03.310 CXX test/cpp_headers/ioat.o 00:04:03.310 CC test/nvme/aer/aer.o 00:04:03.310 CXX test/cpp_headers/ioat_spec.o 00:04:03.310 CC examples/ioat/perf/perf.o 00:04:03.310 CXX test/cpp_headers/iscsi_spec.o 00:04:03.310 CXX test/cpp_headers/json.o 00:04:03.310 CC examples/sock/hello_world/hello_sock.o 00:04:03.310 CC examples/nvme/hello_world/hello_world.o 00:04:03.310 CXX test/cpp_headers/jsonrpc.o 00:04:03.310 CC test/event/reactor_perf/reactor_perf.o 00:04:03.310 CXX test/cpp_headers/keyring.o 00:04:03.310 CC examples/nvme/reconnect/reconnect.o 00:04:03.310 CXX test/cpp_headers/keyring_module.o 00:04:03.310 CC test/event/event_perf/event_perf.o 00:04:03.310 CXX test/cpp_headers/likely.o 00:04:03.310 CC examples/nvme/hotplug/hotplug.o 00:04:03.310 CXX test/cpp_headers/log.o 00:04:03.310 CXX test/cpp_headers/memory.o 00:04:03.310 CXX test/cpp_headers/lvol.o 00:04:03.310 CC examples/idxd/perf/perf.o 00:04:03.310 CC examples/nvme/arbitration/arbitration.o 
00:04:03.310 CXX test/cpp_headers/mmio.o 00:04:03.310 CXX test/cpp_headers/nbd.o 00:04:03.310 CXX test/cpp_headers/nvme.o 00:04:03.310 CC examples/nvme/abort/abort.o 00:04:03.310 CXX test/cpp_headers/notify.o 00:04:03.310 CC test/nvme/reserve/reserve.o 00:04:03.310 CXX test/cpp_headers/nvme_intel.o 00:04:03.310 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.310 CC examples/accel/perf/accel_perf.o 00:04:03.310 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.310 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.310 CXX test/cpp_headers/nvme_spec.o 00:04:03.310 CC test/nvme/err_injection/err_injection.o 00:04:03.310 CC test/nvme/e2edp/nvme_dp.o 00:04:03.310 CC test/nvme/startup/startup.o 00:04:03.310 CXX test/cpp_headers/nvme_zns.o 00:04:03.310 CC test/nvme/sgl/sgl.o 00:04:03.310 CC test/nvme/boot_partition/boot_partition.o 00:04:03.310 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.310 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.310 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.310 CC test/app/stub/stub.o 00:04:03.310 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.310 CXX test/cpp_headers/nvmf.o 00:04:03.310 CC test/nvme/reset/reset.o 00:04:03.310 CXX test/cpp_headers/nvmf_spec.o 00:04:03.310 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.310 CC test/nvme/compliance/nvme_compliance.o 00:04:03.310 CC test/app/histogram_perf/histogram_perf.o 00:04:03.310 CC test/nvme/simple_copy/simple_copy.o 00:04:03.310 CXX test/cpp_headers/nvmf_transport.o 00:04:03.310 CC test/nvme/connect_stress/connect_stress.o 00:04:03.310 CC test/thread/poller_perf/poller_perf.o 00:04:03.310 CXX test/cpp_headers/opal.o 00:04:03.310 CC test/app/jsoncat/jsoncat.o 00:04:03.310 CXX test/cpp_headers/opal_spec.o 00:04:03.310 CXX test/cpp_headers/pci_ids.o 00:04:03.310 CC examples/blob/cli/blobcli.o 00:04:03.310 CXX test/cpp_headers/queue.o 00:04:03.310 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.310 CC test/event/app_repeat/app_repeat.o 00:04:03.310 CXX test/cpp_headers/reduce.o 00:04:03.310 CXX test/cpp_headers/pipe.o 00:04:03.310 CXX test/cpp_headers/rpc.o 00:04:03.310 CC examples/bdev/hello_world/hello_bdev.o 00:04:03.310 CXX test/cpp_headers/scheduler.o 00:04:03.310 CC examples/blob/hello_world/hello_blob.o 00:04:03.310 CC test/env/vtophys/vtophys.o 00:04:03.310 CC app/fio/nvme/fio_plugin.o 00:04:03.310 CC test/env/memory/memory_ut.o 00:04:03.310 CC test/nvme/fdp/fdp.o 00:04:03.310 CC test/nvme/cuse/cuse.o 00:04:03.310 CC test/event/scheduler/scheduler.o 00:04:03.310 CC test/nvme/overhead/overhead.o 00:04:03.310 CC test/env/pci/pci_ut.o 00:04:03.310 CC test/dma/test_dma/test_dma.o 00:04:03.310 CC test/bdev/bdevio/bdevio.o 00:04:03.310 CC examples/bdev/bdevperf/bdevperf.o 00:04:03.579 CC test/blobfs/mkfs/mkfs.o 00:04:03.579 CC test/accel/dif/dif.o 00:04:03.579 CC examples/thread/thread/thread_ex.o 00:04:03.579 CC examples/nvmf/nvmf/nvmf.o 00:04:03.579 CC app/fio/bdev/fio_plugin.o 00:04:03.579 CC test/app/bdev_svc/bdev_svc.o 00:04:03.579 LINK spdk_lspci 00:04:03.579 CXX test/cpp_headers/scsi.o 00:04:03.579 LINK vhost 00:04:03.579 LINK nvmf_tgt 00:04:03.579 LINK spdk_nvme_discover 00:04:03.845 LINK rpc_client_test 00:04:03.845 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.845 LINK spdk_tgt 00:04:03.845 LINK interrupt_tgt 00:04:03.845 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.845 LINK spdk_trace_record 00:04:03.845 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.845 LINK iscsi_tgt 00:04:03.845 CC test/lvol/esnap/esnap.o 00:04:03.845 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.845 
LINK reactor_perf 00:04:03.845 LINK led 00:04:03.845 LINK pmr_persistence 00:04:03.845 LINK reactor 00:04:03.845 LINK zipf 00:04:03.845 LINK event_perf 00:04:03.845 LINK jsoncat 00:04:03.845 LINK lsvmd 00:04:03.845 LINK poller_perf 00:04:03.845 LINK env_dpdk_post_init 00:04:04.103 LINK ioat_perf 00:04:04.103 LINK boot_partition 00:04:04.103 LINK hello_world 00:04:04.103 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:04.103 LINK histogram_perf 00:04:04.103 LINK verify 00:04:04.103 LINK cmb_copy 00:04:04.103 LINK app_repeat 00:04:04.103 LINK stub 00:04:04.103 LINK vtophys 00:04:04.103 LINK startup 00:04:04.103 LINK fused_ordering 00:04:04.103 LINK doorbell_aers 00:04:04.103 LINK reserve 00:04:04.103 LINK connect_stress 00:04:04.103 LINK err_injection 00:04:04.103 CXX test/cpp_headers/scsi_spec.o 00:04:04.103 LINK hotplug 00:04:04.103 LINK hello_bdev 00:04:04.103 LINK hello_sock 00:04:04.103 LINK sgl 00:04:04.103 CXX test/cpp_headers/sock.o 00:04:04.103 LINK spdk_dd 00:04:04.103 LINK bdev_svc 00:04:04.103 CXX test/cpp_headers/stdinc.o 00:04:04.103 LINK nvme_dp 00:04:04.103 CXX test/cpp_headers/string.o 00:04:04.103 CXX test/cpp_headers/thread.o 00:04:04.103 CXX test/cpp_headers/trace.o 00:04:04.103 LINK hello_blob 00:04:04.103 CXX test/cpp_headers/trace_parser.o 00:04:04.103 CXX test/cpp_headers/tree.o 00:04:04.103 CXX test/cpp_headers/ublk.o 00:04:04.103 LINK simple_copy 00:04:04.103 CXX test/cpp_headers/util.o 00:04:04.103 CXX test/cpp_headers/uuid.o 00:04:04.103 CXX test/cpp_headers/version.o 00:04:04.103 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.103 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.103 CXX test/cpp_headers/vhost.o 00:04:04.103 CXX test/cpp_headers/vmd.o 00:04:04.103 CXX test/cpp_headers/xor.o 00:04:04.103 LINK mkfs 00:04:04.103 CXX test/cpp_headers/zipf.o 00:04:04.103 LINK reset 00:04:04.103 LINK scheduler 00:04:04.103 LINK thread 00:04:04.362 LINK idxd_perf 00:04:04.362 LINK nvme_compliance 00:04:04.362 LINK abort 00:04:04.362 LINK aer 00:04:04.362 LINK reconnect 00:04:04.362 LINK nvmf 00:04:04.362 LINK overhead 00:04:04.362 LINK arbitration 00:04:04.362 LINK fdp 00:04:04.362 LINK spdk_trace 00:04:04.362 LINK test_dma 00:04:04.362 LINK pci_ut 00:04:04.362 LINK bdevio 00:04:04.362 LINK nvme_manage 00:04:04.622 LINK spdk_nvme 00:04:04.622 LINK dif 00:04:04.622 LINK accel_perf 00:04:04.622 LINK blobcli 00:04:04.622 LINK spdk_bdev 00:04:04.622 LINK vhost_fuzz 00:04:04.622 LINK nvme_fuzz 00:04:04.622 LINK spdk_nvme_identify 00:04:04.622 LINK spdk_top 00:04:04.622 LINK spdk_nvme_perf 00:04:04.622 LINK bdevperf 00:04:04.622 LINK mem_callbacks 00:04:04.883 LINK memory_ut 00:04:05.144 LINK cuse 00:04:05.715 LINK iscsi_fuzz 00:04:08.290 LINK esnap 00:04:08.591 00:04:08.591 real 0m49.080s 00:04:08.591 user 6m33.982s 00:04:08.591 sys 4m29.867s 00:04:08.591 13:32:01 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:04:08.591 13:32:01 make -- common/autotest_common.sh@10 -- $ set +x 00:04:08.591 ************************************ 00:04:08.591 END TEST make 00:04:08.591 ************************************ 00:04:08.591 13:32:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:08.591 13:32:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:08.591 13:32:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:08.591 13:32:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.591 13:32:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:08.591 13:32:01 -- 
pm/common@44 -- $ pid=1776257 00:04:08.591 13:32:01 -- pm/common@50 -- $ kill -TERM 1776257 00:04:08.591 13:32:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.591 13:32:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:08.591 13:32:01 -- pm/common@44 -- $ pid=1776258 00:04:08.591 13:32:01 -- pm/common@50 -- $ kill -TERM 1776258 00:04:08.591 13:32:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.591 13:32:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:08.591 13:32:01 -- pm/common@44 -- $ pid=1776260 00:04:08.591 13:32:01 -- pm/common@50 -- $ kill -TERM 1776260 00:04:08.591 13:32:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.591 13:32:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:08.591 13:32:01 -- pm/common@44 -- $ pid=1776284 00:04:08.591 13:32:01 -- pm/common@50 -- $ sudo -E kill -TERM 1776284 00:04:08.591 13:32:01 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:08.850 13:32:01 -- nvmf/common.sh@7 -- # uname -s 00:04:08.850 13:32:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.850 13:32:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.851 13:32:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.851 13:32:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.851 13:32:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.851 13:32:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.851 13:32:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.851 13:32:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.851 13:32:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.851 13:32:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.851 13:32:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:08.851 13:32:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:08.851 13:32:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.851 13:32:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.851 13:32:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:08.851 13:32:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.851 13:32:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:08.851 13:32:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.851 13:32:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.851 13:32:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.851 13:32:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 13:32:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:08.851 13:32:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 13:32:01 -- paths/export.sh@5 -- # export PATH 00:04:08.851 13:32:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.851 13:32:01 -- nvmf/common.sh@47 -- # : 0 00:04:08.851 13:32:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:08.851 13:32:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:08.851 13:32:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.851 13:32:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.851 13:32:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.851 13:32:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:08.851 13:32:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:08.851 13:32:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:08.851 13:32:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.851 13:32:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.851 13:32:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.851 13:32:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.851 13:32:01 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:08.851 13:32:01 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.851 13:32:01 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:04:08.851 13:32:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.851 13:32:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.851 13:32:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.851 13:32:01 -- spdk/autotest.sh@48 -- # udevadm_pid=1838446 00:04:08.851 13:32:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:08.851 13:32:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.851 13:32:01 -- pm/common@17 -- # local monitor 00:04:08.851 13:32:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.851 13:32:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.851 13:32:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.851 13:32:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.851 13:32:01 -- pm/common@21 -- # date +%s 00:04:08.851 13:32:01 -- pm/common@25 -- # sleep 1 00:04:08.851 13:32:01 -- pm/common@21 -- # date +%s 00:04:08.851 13:32:01 -- pm/common@21 -- # date +%s 00:04:08.851 13:32:01 -- pm/common@21 -- # date +%s 00:04:08.851 13:32:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105521 00:04:08.851 13:32:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1718105521 00:04:08.851 13:32:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105521 00:04:08.851 13:32:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105521 00:04:08.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105521_collect-vmstat.pm.log 00:04:08.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105521_collect-cpu-load.pm.log 00:04:08.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105521_collect-cpu-temp.pm.log 00:04:08.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105521_collect-bmc-pm.bmc.pm.log 00:04:09.789 13:32:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.789 13:32:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.789 13:32:02 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:09.789 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.789 13:32:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.789 13:32:02 -- common/autotest_common.sh@747 -- # xtrace_disable 00:04:09.789 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.789 13:32:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:04:09.789 13:32:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:09.789 13:32:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:09.789 13:32:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:04:09.790 13:32:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:09.790 13:32:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.790 13:32:02 -- common/autotest_common.sh@1454 -- # uname 00:04:09.790 13:32:02 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:04:09.790 13:32:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:09.790 13:32:02 -- common/autotest_common.sh@1474 -- # uname 00:04:09.790 13:32:02 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:04:09.790 13:32:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:09.790 13:32:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:09.790 13:32:02 -- spdk/autotest.sh@72 -- # hash lcov 00:04:09.790 13:32:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:09.790 13:32:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:09.790 --rc lcov_branch_coverage=1 00:04:09.790 --rc lcov_function_coverage=1 00:04:09.790 --rc genhtml_branch_coverage=1 00:04:09.790 --rc genhtml_function_coverage=1 00:04:09.790 --rc genhtml_legend=1 00:04:09.790 --rc geninfo_all_blocks=1 00:04:09.790 ' 00:04:09.790 13:32:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:09.790 --rc lcov_branch_coverage=1 00:04:09.790 --rc lcov_function_coverage=1 00:04:09.790 --rc genhtml_branch_coverage=1 00:04:09.790 --rc genhtml_function_coverage=1 00:04:09.790 --rc genhtml_legend=1 00:04:09.790 --rc geninfo_all_blocks=1 00:04:09.790 ' 00:04:09.790 13:32:02 -- 
spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:09.790 --rc lcov_branch_coverage=1 00:04:09.790 --rc lcov_function_coverage=1 00:04:09.790 --rc genhtml_branch_coverage=1 00:04:09.790 --rc genhtml_function_coverage=1 00:04:09.790 --rc genhtml_legend=1 00:04:09.790 --rc geninfo_all_blocks=1 00:04:09.790 --no-external' 00:04:09.790 13:32:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:09.790 --rc lcov_branch_coverage=1 00:04:09.790 --rc lcov_function_coverage=1 00:04:09.790 --rc genhtml_branch_coverage=1 00:04:09.790 --rc genhtml_function_coverage=1 00:04:09.790 --rc genhtml_legend=1 00:04:09.790 --rc geninfo_all_blocks=1 00:04:09.790 --no-external' 00:04:09.790 13:32:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:10.049 lcov: LCOV version 1.14 00:04:10.049 13:32:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:04:22.281 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:22.281 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 
00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:37.194 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:37.194 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:37.194 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 
00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:37.195 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:37.195 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:37.195 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:37.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:37.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:37.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:37.196 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:37.196 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:37.767 13:32:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:37.767 13:32:30 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:37.767 13:32:30 -- common/autotest_common.sh@10 -- # set +x 00:04:37.767 13:32:30 -- spdk/autotest.sh@91 -- # rm -f 00:04:37.767 13:32:30 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.068 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:41.068 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:41.068 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:41.068 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:41.328 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:41.328 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:41.328 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:41.329 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:41.329 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:41.329 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:41.329 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:41.329 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:41.329 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:41.329 
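At this point autotest has only captured the empty lcov baseline (the "lcov -q -c -i -t Baseline ..." call above). The long run of geninfo "no functions found" warnings is normal for objects that define no functions of their own, such as the header-compilation units under test/cpp_headers. A minimal sketch of an equivalent baseline capture follows; SPDK_DIR and OUT_DIR are placeholders, not the workspace paths used by this job.

#!/usr/bin/env bash
# Sketch only: capture a zero-count "baseline" before any test runs, so the
# final report can include files that were never executed.
SPDK_DIR=/path/to/spdk
OUT_DIR=/path/to/output

LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external)

# -c captures coverage data, -i records it as "initial" (all counts zero),
# -t tags the tracefile with a test name, -d points at the build tree.
lcov "${LCOV_OPTS[@]}" -q -c -i -t Baseline \
     -d "$SPDK_DIR" -o "$OUT_DIR/cov_base.info"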
0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:41.329 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:41.590 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:41.590 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:41.590 13:32:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:41.590 13:32:34 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:41.590 13:32:34 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:41.590 13:32:34 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:41.590 13:32:34 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:41.590 13:32:34 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:41.590 13:32:34 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:41.590 13:32:34 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.590 13:32:34 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:41.590 13:32:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:41.590 13:32:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:41.590 13:32:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:41.590 13:32:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:41.590 13:32:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:41.590 13:32:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:41.590 No valid GPT data, bailing 00:04:41.590 13:32:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.590 13:32:34 -- scripts/common.sh@391 -- # pt= 00:04:41.590 13:32:34 -- scripts/common.sh@392 -- # return 1 00:04:41.590 13:32:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:41.590 1+0 records in 00:04:41.590 1+0 records out 00:04:41.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456794 s, 230 MB/s 00:04:41.590 13:32:34 -- spdk/autotest.sh@118 -- # sync 00:04:41.590 13:32:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:41.590 13:32:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:41.590 13:32:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.731 13:32:42 -- spdk/autotest.sh@124 -- # uname -s 00:04:49.731 13:32:42 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:49.731 13:32:42 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:49.731 13:32:42 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:49.731 13:32:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:49.732 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:04:49.732 ************************************ 00:04:49.732 START TEST setup.sh 00:04:49.732 ************************************ 00:04:49.732 13:32:42 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:49.732 * Looking for test storage... 
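The block above is the pre-cleanup pass: get_zoned_devs walks /sys/block/nvme*/queue/zoned to exclude zoned namespaces, block_in_use asks scripts/spdk-gpt.py and blkid whether a partition table is present ("No valid GPT data, bailing" means there is none), and only then is the first MiB of the namespace overwritten with dd. Below is a simplified re-implementation of that flow, not the exact autotest.sh code.

#!/usr/bin/env bash
# Simplified sketch of the cleanup pass: skip zoned namespaces, skip disks
# that still carry a partition table, wipe the first MiB of everything else.
for dev in /dev/nvme*n1; do
    [[ -e $dev ]] || continue
    name=$(basename "$dev")
    # a namespace is zoned when the sysfs attribute reports anything but "none"
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat /sys/block/$name/queue/zoned) != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi
    # blkid prints a PTTYPE value (e.g. gpt) when a partition table exists
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
        echo "skipping $dev: partition table '$pt' present"
        continue
    fi
    # overwrite the first MiB so stale metadata cannot confuse later tests
    dd if=/dev/zero of="$dev" bs=1M count=1
done
sync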
00:04:49.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:49.732 13:32:42 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:49.732 13:32:42 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:49.732 13:32:42 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:49.732 13:32:42 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:49.732 13:32:42 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:49.732 13:32:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:49.732 ************************************ 00:04:49.732 START TEST acl 00:04:49.732 ************************************ 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:49.732 * Looking for test storage... 00:04:49.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:49.732 13:32:42 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.732 13:32:42 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:49.732 13:32:42 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:49.732 13:32:42 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:49.732 13:32:42 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:49.732 13:32:42 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:49.732 13:32:42 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:49.732 13:32:42 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.732 13:32:42 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.939 13:32:46 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:53.939 13:32:46 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:53.939 13:32:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:53.939 13:32:46 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:53.939 13:32:46 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.939 13:32:46 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:57.240 Hugepages 00:04:57.240 node hugesize free / total 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 
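Every sub-test in this log runs through the run_test helper from autotest_common.sh, which is what prints the asterisk banners and the real/user/sys timing around each START TEST / END TEST pair. The sketch below is inferred from that output rather than copied from the SPDK source, so treat the function body and banner text as an approximation.

#!/usr/bin/env bash
# Rough approximation of a run_test-style wrapper: banner, timed execution,
# banner, preserved exit status. Not the actual SPDK implementation.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# example: run_test "acl" ./test/setup/acl.sh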
13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 00:04:57.240 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.240 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@22 -- 
# drivers["$dev"]=nvme 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:57.241 13:32:49 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:57.241 13:32:49 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:57.241 13:32:49 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:57.241 13:32:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:57.241 ************************************ 00:04:57.241 START TEST denied 00:04:57.241 ************************************ 00:04:57.241 13:32:49 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:57.241 13:32:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:57.241 13:32:49 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:57.241 13:32:49 
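The loop above is acl.sh collecting devices from "setup.sh status": the read -r _ dev _ _ _ driver _ call maps to the BDF (second) and Driver (sixth) columns of the table, ioatdma rows are skipped with continue, and the single NVMe controller at 0000:65:00.0 ends up in the devs/drivers arrays. A self-contained sketch of the same parsing, with the setup.sh path shortened to a placeholder:

#!/usr/bin/env bash
# Sketch of the device-collection loop: keep rows whose Driver column is
# "nvme", remember BDF -> driver. Column order follows the status header
# "Type BDF Vendor Device NUMA Driver Device Block devices".
declare -a devs
declare -A drivers

while read -r _ bdf _ _ _ driver _; do
    [[ $bdf == *:*:*.* ]] || continue    # skip header and hugepage lines
    [[ $driver == nvme ]] || continue    # ignore ioatdma and friends
    devs+=("$bdf")
    drivers[$bdf]=$driver
done < <(./scripts/setup.sh status)      # placeholder path

printf 'found %d NVMe controller(s): %s\n' "${#devs[@]}" "${devs[*]}"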
setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:57.241 13:32:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.241 13:32:49 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:00.542 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.542 13:32:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.829 00:05:05.829 real 0m8.114s 00:05:05.829 user 0m2.626s 00:05:05.829 sys 0m4.827s 00:05:05.829 13:32:57 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:05.829 13:32:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:05.829 ************************************ 00:05:05.829 END TEST denied 00:05:05.829 ************************************ 00:05:05.829 13:32:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:05.829 13:32:57 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:05.829 13:32:57 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:05.829 13:32:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:05.829 ************************************ 00:05:05.829 START TEST allowed 00:05:05.830 ************************************ 00:05:05.830 13:32:58 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:05:05.830 13:32:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:05:05.830 13:32:58 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:05.830 13:32:58 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:05:05.830 13:32:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.830 13:32:58 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:11.150 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:11.150 13:33:03 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:11.150 13:33:03 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:11.150 13:33:03 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:11.150 13:33:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.150 13:33:03 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.694 00:05:13.694 real 0m8.580s 00:05:13.694 user 0m2.367s 00:05:13.694 sys 0m4.456s 00:05:13.694 13:33:06 setup.sh.acl.allowed -- 
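The two sub-tests exercised here drive setup.sh's access-control variables: "denied" blocks the controller with PCI_BLOCKED and expects the "Skipping denied controller" message, while "allowed" whitelists only that BDF with PCI_ALLOWED and expects the controller to be rebound from the kernel nvme driver to a userspace driver (vfio-pci on this node). A condensed sketch of those two checks follows; the setup.sh path is a placeholder, the run needs root, and the grep patterns mirror the ones visible in the log.

#!/usr/bin/env bash
# Condensed sketch of the "denied" and "allowed" ACL checks.
BDF=0000:65:00.0

# denied: with the controller blocked, setup.sh must leave it alone
PCI_BLOCKED=" $BDF" ./scripts/setup.sh config \
    | grep "Skipping denied controller at $BDF"

# allowed: with only this controller allowed, it must be rebound to a
# userspace driver such as vfio-pci
PCI_ALLOWED="$BDF" ./scripts/setup.sh config \
    | grep -E "$BDF .*: nvme -> .*"

# the driver symlink now points at the userspace driver
readlink -f "/sys/bus/pci/devices/$BDF/driver"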
common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.694 13:33:06 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:13.694 ************************************ 00:05:13.695 END TEST allowed 00:05:13.695 ************************************ 00:05:13.957 00:05:13.957 real 0m24.272s 00:05:13.957 user 0m7.792s 00:05:13.957 sys 0m14.253s 00:05:13.957 13:33:06 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.957 13:33:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:13.957 ************************************ 00:05:13.957 END TEST acl 00:05:13.957 ************************************ 00:05:13.957 13:33:06 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:05:13.957 13:33:06 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.957 13:33:06 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.957 13:33:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.957 ************************************ 00:05:13.957 START TEST hugepages 00:05:13.957 ************************************ 00:05:13.957 13:33:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:05:13.957 * Looking for test storage... 00:05:13.957 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 108203396 kB' 'MemAvailable: 111393544 kB' 'Buffers: 4132 kB' 'Cached: 9370204 kB' 'SwapCached: 0 kB' 'Active: 6367344 kB' 'Inactive: 3495736 kB' 'Active(anon): 5978680 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492112 kB' 'Mapped: 
215436 kB' 'Shmem: 5489936 kB' 'KReclaimable: 260156 kB' 'Slab: 966532 kB' 'SReclaimable: 260156 kB' 'SUnreclaim: 706376 kB' 'KernelStack: 27584 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 7465676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235644 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.957 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.958 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 
13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:13.959 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.221 13:33:06 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:14.221 13:33:06 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.221 13:33:06 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.221 13:33:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.221 ************************************ 00:05:14.221 START TEST default_setup 00:05:14.221 ************************************ 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.221 13:33:06 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:17.526 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:17.526 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:17.790 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:17.790 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
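The "0000:80:01.x (8086 0b00): ioatdma -> vfio-pci" and "0000:65:00.0 (144d a80a): nvme -> vfio-pci" lines above are the output of spdk/scripts/setup.sh detaching the ioatdma and nvme kernel drivers so the devices can be driven from userspace. A minimal sketch of the generic sysfs mechanism behind such a rebind (run as root); this illustrates the kernel interface only, not the SPDK script itself, and the BDF is simply the NVMe address taken from this log:

    bdf=0000:65:00.0                                             # NVMe device from the trace above
    modprobe vfio-pci                                            # make sure the target driver is loaded
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"  # detach nvme (or ioatdma)
    fi
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next probe to vfio-pci
    echo "$bdf" > /sys/bus/pci/drivers_probe                     # re-probe, giving "nvme -> vfio-pci"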
00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110361920 kB' 'MemAvailable: 113552020 kB' 'Buffers: 4132 kB' 'Cached: 9370324 kB' 'SwapCached: 0 kB' 'Active: 6377752 kB' 'Inactive: 3495736 kB' 'Active(anon): 5989088 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502060 kB' 'Mapped: 214848 kB' 'Shmem: 5490056 kB' 'KReclaimable: 260060 kB' 'Slab: 963848 kB' 'SReclaimable: 260060 kB' 'SUnreclaim: 703788 kB' 'KernelStack: 27344 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7478996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235624 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
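The long printf above is /proc/meminfo captured into an array, and the surrounding key-by-key [[ ... ]] checks are setup/common.sh's get_meminfo scanning that capture until it reaches the requested field (AnonHugePages here). A standalone sketch of the same scan, under a hypothetical name rather than the SPDK helper itself:

    # Print the value column of one /proc/meminfo field, the way the traced loop does.
    meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue    # skip keys that do not match
            echo "$val"                          # value only, unit column dropped
            return 0
        done < /proc/meminfo
        return 1                                 # field not present
    }
    # meminfo_value AnonHugePages  -> 0     (kB, per the snapshot above)
    # meminfo_value Hugepagesize   -> 2048  (kB)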
00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.790 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.791 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110361400 kB' 'MemAvailable: 113551500 kB' 'Buffers: 4132 kB' 'Cached: 9370324 kB' 'SwapCached: 0 kB' 'Active: 6378432 kB' 'Inactive: 3495736 kB' 'Active(anon): 5989768 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502724 kB' 'Mapped: 214800 kB' 'Shmem: 5490056 kB' 'KReclaimable: 260060 kB' 'Slab: 963848 kB' 'SReclaimable: 260060 kB' 'SUnreclaim: 703788 kB' 'KernelStack: 27536 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7479012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235608 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
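Both snapshots above report HugePages_Total: 1024 against Hugepagesize: 2048 kB, which is exactly the 2097152 kB pool that get_test_nr_hugepages sized at the start of default_setup. A quick check of that bookkeeping, using only values copied from the trace:

    hp_total=1024                        # HugePages_Total from the snapshot
    hp_size_kb=2048                      # Hugepagesize in kB
    echo $(( hp_total * hp_size_kb ))    # 2097152 kB, matching the 'Hugetlb:' line and
                                         # the size argument to get_test_nr_hugepages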
00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.792 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
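Besides /proc/meminfo, the same 2048 kB pool state is visible per NUMA node under the sysfs files that clear_hp zeroed before the test (hugepages.sh@39-41) and the global counter named at hugepages.sh@18. A small sketch for inspecting both nodes of this 2-node box; the paths follow the trace, the loop itself is only an illustration:

    cat /proc/sys/vm/nr_hugepages        # global count, cf. hugepages.sh@18
    for d in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-2048kB; do
        [ -d "$d" ] || continue
        printf '%s: %s allocated, %s free\n' \
            "$d" "$(cat "$d/nr_hugepages")" "$(cat "$d/free_hugepages")"
    done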
00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 
13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:17.793 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.794 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110360364 kB' 'MemAvailable: 113550464 kB' 'Buffers: 4132 kB' 'Cached: 9370344 kB' 'SwapCached: 0 kB' 'Active: 6377672 kB' 'Inactive: 3495736 kB' 'Active(anon): 5989008 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502432 kB' 'Mapped: 214808 kB' 'Shmem: 5490076 kB' 'KReclaimable: 260060 kB' 'Slab: 963952 kB' 'SReclaimable: 260060 kB' 'SUnreclaim: 703892 kB' 'KernelStack: 27424 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7479896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235640 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.794 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:17.795 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 
13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.061 nr_hugepages=1024 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.061 resv_hugepages=0 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.061 surplus_hugepages=0 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.061 anon_hugepages=0 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110357952 kB' 'MemAvailable: 113548052 kB' 'Buffers: 4132 kB' 'Cached: 9370368 kB' 'SwapCached: 0 kB' 'Active: 6379728 kB' 'Inactive: 3495736 kB' 'Active(anon): 5991064 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504392 kB' 'Mapped: 215312 kB' 'Shmem: 5490100 kB' 'KReclaimable: 260060 kB' 'Slab: 963952 kB' 'SReclaimable: 260060 kB' 'SUnreclaim: 703892 kB' 'KernelStack: 27264 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7481244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235528 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.061 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.062 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60075288 kB' 'MemUsed: 5583720 kB' 'SwapCached: 0 kB' 'Active: 1223508 kB' 'Inactive: 204328 kB' 'Active(anon): 1055452 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1218552 kB' 'Mapped: 91308 kB' 'AnonPages: 212584 kB' 'Shmem: 846168 kB' 'KernelStack: 14712 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157800 kB' 'Slab: 501544 kB' 'SReclaimable: 157800 kB' 'SUnreclaim: 343744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.063 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:18.064 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.065 node0=1024 expecting 1024 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.065 00:05:18.065 real 0m3.851s 00:05:18.065 user 0m1.471s 00:05:18.065 sys 0m2.377s 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:18.065 13:33:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:18.065 ************************************ 00:05:18.065 END TEST default_setup 00:05:18.065 ************************************ 00:05:18.065 13:33:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:18.065 13:33:10 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:18.065 13:33:10 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:18.065 13:33:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:18.065 ************************************ 00:05:18.065 START TEST per_node_1G_alloc 00:05:18.065 
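The wall of xtrace above is setup/common.sh's get_meminfo doing a linear scan of /proc/meminfo (or of /sys/devices/system/node/node<N>/meminfo when a node id is passed, with the "Node <N> " prefix stripped), reading each "key: value" pair until it reaches the field that was asked for: HugePages_Rsvd system-wide, then HugePages_Total, then HugePages_Surp for node 0. default_setup then confirms that node 0 ended up with the 1024 default-sized hugepages the test requested. A minimal standalone sketch of that lookup and check, with a hypothetical helper name (get_meminfo_field is illustrative, not the traced function):

#!/usr/bin/env bash
# Sketch only: mirrors the idea behind the get_meminfo trace above, not its code.
get_meminfo_field() {
    # Look up one meminfo field, optionally restricted to a NUMA node.
    # Per-node meminfo files prefix every line with "Node <id> ".
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    awk -v key="$key:" '{ sub(/^Node [0-9]+ /, "") } $1 == key { print $2; exit }' "$mem_f"
}

# Condensed form of the check traced above: the run shows zero reserved and
# zero surplus pages, so node 0 should hold exactly the 1024 pages requested.
expected=1024
resv=$(get_meminfo_field HugePages_Rsvd)        # 0 in the run above
surp=$(get_meminfo_field HugePages_Surp 0)      # 0 in the run above
node0=$(get_meminfo_field HugePages_Total 0)
echo "node0=${node0:-0} expecting $expected (resv=${resv:-0} surp=${surp:-0})"
(( ${node0:-0} == expected )) || echo "unexpected hugepage count on node 0" >&2

The per_node_1G_alloc test that starts next asks for 1 GiB worth of default-sized hugepages on each NUMA node: get_test_nr_hugepages 1048576 0 1 resolves to 512 pages of 2048 kB for node 0 and another 512 for node 1, which it applies by running scripts/setup.sh with NRHUGE=512 and HUGENODE=0,1.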
00:05:18.065 13:33:10 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:18.065 13:33:10 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:18.065 13:33:10 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:18.065 13:33:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:18.065 ************************************
00:05:18.065 START TEST per_node_1G_alloc
00:05:18.065 ************************************
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
[xtrace of the surrounding bookkeeping (shift, local declarations, the size >= default_hugepages check, the loop over user_nodes, return 0) elided]
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:18.065 13:33:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:21.366 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
[scripts/setup.sh output: the sixteen 0000:00:01.x and 0000:80:01.x devices (8086 0b00) are likewise reported as "Already using the vfio-pci driver"]
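For this test hugepages.sh sets NRHUGE=512 and HUGENODE=0,1 and re-runs scripts/setup.sh, i.e. it asks for 512 default-size (2048 kB) hugepages on each of NUMA nodes 0 and 1; the PCI lines above only confirm the devices are already bound to vfio-pci. Outside the SPDK wrapper, the same per-node reservation can be made directly through the kernel's sysfs interface. The snippet below is a sketch of that idea with the node list and page count hard-coded to match this run; it is not the logic of setup.sh itself.

```bash
#!/usr/bin/env bash
# Reserve 512 x 2048 kB hugepages on NUMA nodes 0 and 1 via sysfs,
# mirroring the NRHUGE=512 HUGENODE=0,1 request in the trace above.
NRHUGE=512
for node in 0 1; do
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" | sudo tee "$sysfs" >/dev/null
    # The kernel may grant fewer pages than requested if memory is fragmented.
    echo "node$node: requested $NRHUGE, granted $(cat "$sysfs")"
done
```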
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
[xtrace of verify_nr_hugepages' local declarations (node, sorted_t, sorted_s, surp, resv, anon) elided]
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.366 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.367 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.367 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.367 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110358852 kB' 'MemAvailable: 113548948 kB' 'Buffers: 4132 kB' 'Cached: 9370480 kB' 'SwapCached: 0 kB' 'Active: 6374504 kB' 'Inactive: 3495736 kB' 'Active(anon): 5985840 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498948 kB' 'Mapped: 213764 kB' 'Shmem: 5490212 kB' 'KReclaimable: 260052 kB' 'Slab: 963456 kB' 'SReclaimable: 260052 kB' 'SUnreclaim: 703404 kB' 'KernelStack: 27168 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7457808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235512 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB'
[xtrace of get_meminfo scanning the snapshot above field by field for AnonHugePages; MemTotal through HardwareCorrupted are skipped with "continue"]
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
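verify_nr_hugepages now has anon=0 and still needs HugePages_Surp and HugePages_Rsvd, so get_meminfo is called twice more and each call rescans /proc/meminfo from the top, which is what produces the two near-identical snapshots that follow. Purely as an illustration of the same lookup (not how setup/common.sh is written), all three counters can be pulled in a single pass with awk:

```bash
#!/usr/bin/env bash
# One-pass alternative to three separate get_meminfo calls (illustration only).
read -r anon surp resv < <(awk '
    /^AnonHugePages:/  { anon = $2 }
    /^HugePages_Surp:/ { surp = $2 }
    /^HugePages_Rsvd:/ { resv = $2 }
    END { print anon + 0, surp + 0, resv + 0 }   # +0 forces 0 for missing keys
' /proc/meminfo)
echo "anon=$anon surp=$surp resv=$resv"          # anon=0 surp=0 resv=0 on this node
```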
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.633 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
[second /proc/meminfo snapshot via setup/common.sh@16 printf, nearly identical to the one above; the hugepage counters are unchanged: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB]
[xtrace of get_meminfo scanning that snapshot field by field for HugePages_Surp; MemTotal through HugePages_Rsvd are skipped with "continue"]
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
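With anon, surp and resv all zero, the remaining work of verify_nr_hugepages is to read the per-node hugepage counts and print lines in the same `node0=1024 expecting 1024` format seen at the end of default_setup. The sketch below shows that per-node comparison; the variable names are invented and the expected value is taken from this test's request of 512 pages per node, so it is an illustration of the check rather than SPDK's implementation.

```bash
#!/usr/bin/env bash
# Compare each node's 2048 kB hugepage count against an expected value and
# report in the "nodeN=<actual> expecting <expected>" format used by the test.
expected=512   # this run requested 512 pages on each of nodes 0 and 1
status=0
for dir in /sys/devices/system/node/node[0-9]*; do
    node=${dir##*/node}
    actual=$(cat "$dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node=$actual expecting $expected"
    [[ $actual -eq $expected ]] || status=1
done
exit $status
```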
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:21.635 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
[third /proc/meminfo snapshot via setup/common.sh@16 printf, again nearly identical; hugepage counters still HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0]
[xtrace of get_meminfo scanning that snapshot field by field for HugePages_Rsvd; MemTotal through Slab have been skipped when the trace cuts off mid-scan]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.636 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.637 nr_hugepages=1024 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.637 resv_hugepages=0 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.637 surplus_hugepages=0 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.637 anon_hugepages=0 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110360352 kB' 'MemAvailable: 113550448 kB' 'Buffers: 4132 kB' 'Cached: 9370544 kB' 'SwapCached: 0 kB' 'Active: 6373832 kB' 'Inactive: 3495736 kB' 'Active(anon): 5985168 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498216 kB' 'Mapped: 213756 kB' 'Shmem: 5490276 kB' 'KReclaimable: 260052 kB' 'Slab: 963516 kB' 'SReclaimable: 260052 kB' 'SUnreclaim: 703464 kB' 'KernelStack: 27136 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7457868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235512 kB' 
'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.637 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
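To keep the numbers in the trace above straight: the summary echoed further up (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) is what is now being re-checked against /proc/meminfo. The snapshot printed above reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, so the guard (( 1024 == nr_hugepages + surp + resv )) reduces to 1024 == 1024 + 0 + 0. At Hugepagesize: 2048 kB that pool is 1024 x 2048 kB = 2097152 kB, matching the Hugetlb: 2097152 kB line, and the per-node pass further down expects it to be split evenly as 512 + 512 across the two NUMA nodes.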
00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
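The long runs of IFS=': ', read -r var val _ and continue entries above and below are one helper walking /proc/meminfo a field at a time until it reaches the field it was asked for (here HugePages_Total). A condensed bash sketch of that loop, reconstructed from this trace rather than copied from setup/common.sh, so the function name and structure are approximations:

    shopt -s extglob                        # for the +([0-9]) patterns used below

    # get_meminfo_sketch FIELD [NODE]: print FIELD from /proc/meminfo, or from the
    # per-node copy under /sys/devices/system/node/node$NODE/meminfo when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the step repeated throughout this trace
            echo "$val"
            return 0
        done
        return 1
    }

Each continue in the trace is one non-matching meminfo field; the scan only ends at the echo/return pair once the requested field (HugePages_Rsvd, HugePages_Total, HugePages_Surp, ...) turns up.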
00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.638 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.639 13:33:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61113764 kB' 'MemUsed: 4545244 kB' 'SwapCached: 0 kB' 'Active: 1221432 kB' 'Inactive: 204328 kB' 'Active(anon): 1053376 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1218732 kB' 'Mapped: 90496 kB' 'AnonPages: 210316 kB' 'Shmem: 846348 kB' 'KernelStack: 14696 kB' 'PageTables: 3340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157800 kB' 'Slab: 501696 kB' 'SReclaimable: 157800 kB' 'SUnreclaim: 343896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.639 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
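By this point the global pool has been read back (1024 pages) and the trace has moved on to the per-node half of the check: get_nodes globbed /sys/devices/system/node/node+([0-9]), recorded 512 expected pages for each of the two nodes (nodes_sys[...]=512, no_nodes=2), and the loop over nodes_test is now pulling HugePages_Surp out of node0's sysfs meminfo, with node1 following just below. A small self-contained bash sketch of that per-node bookkeeping; the variable names are mine, and the 512-per-node expectation is simply what nodes_sys was set to above:

    shopt -s extglob
    declare -a node_pages
    sum=0
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        n=${node_dir##*node}
        # per-node meminfo lines look like "Node 0 HugePages_Total:   512"
        node_pages[n]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        (( sum += node_pages[n] ))
    done
    global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    echo "per-node HugePages_Total: ${node_pages[*]} (sum $sum, global $global)"
    (( sum == global )) || echo "per-node hugepage counts do not add up to the global pool"

On this machine the dumps bear that out: node0 (above) and node1 (just below) each report HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, summing to the global 1024.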
00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.640 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 49245832 
kB' 'MemUsed: 11434012 kB' 'SwapCached: 0 kB' 'Active: 5152428 kB' 'Inactive: 3291408 kB' 'Active(anon): 4931820 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8155968 kB' 'Mapped: 123260 kB' 'AnonPages: 287900 kB' 'Shmem: 4643952 kB' 'KernelStack: 12440 kB' 'PageTables: 4804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102252 kB' 'Slab: 461820 kB' 'SReclaimable: 102252 kB' 'SUnreclaim: 359568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 
13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.641 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:21.642 node0=512 expecting 512 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:21.642 node1=512 expecting 512 00:05:21.642 13:33:14 
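The block above is the node-1 pass of the same field-by-field scan: get_meminfo switches to /sys/devices/system/node/node1/meminfo, strips the "Node 1 " prefix from each line, and walks the fields until it reaches HugePages_Surp, which reads 0, so both NUMA nodes end up reporting the expected 512 reserved 2048 kB pages (node0=512, node1=512). Below is a condensed sketch of that lookup pattern, reconstructed from the xtrace rather than copied verbatim from setup/common.sh; the _sketch name is mine.
    # Sketch (reconstructed from the trace, not setup/common.sh verbatim):
    # print one meminfo field, optionally scoped to a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node lines look like "Node 1 HugePages_Surp: 0"; drop that prefix
        mem=("${mem[@]#"Node $node "}")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # get_meminfo_sketch HugePages_Surp 1   -> prints 0 on the node traced above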
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:21.642 00:05:21.642 real 0m3.614s 00:05:21.642 user 0m1.457s 00:05:21.642 sys 0m2.218s 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.642 13:33:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:21.642 ************************************ 00:05:21.642 END TEST per_node_1G_alloc 00:05:21.642 ************************************ 00:05:21.642 13:33:14 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:21.642 13:33:14 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.642 13:33:14 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.642 13:33:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:21.902 ************************************ 00:05:21.902 START TEST even_2G_alloc 00:05:21.902 ************************************ 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.902 
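The even_2G_alloc test that starts above turns a 2 GiB request into nr_hugepages=1024 default-size (2048 kB) pages and seeds nodes_test with 512 pages for each of the two NUMA nodes; just below it sets NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before invoking scripts/setup.sh. One way to ask the kernel for that kind of even split directly is the per-node sysfs knob; the following is a hypothetical illustration of the idea (run as root), not a copy of what scripts/setup.sh actually does.
    # Hypothetical sketch: spread NRHUGE 2048 kB hugepages evenly across the
    # online NUMA nodes via the per-node sysfs interface. Needs root.
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$((NRHUGE / ${#nodes[@]}))    # 1024 / 2 = 512 per node on this box
    for node in "${nodes[@]}"; do
        echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # read each node's meminfo back to confirm the split
    grep HugePages_Total /sys/devices/system/node/node*/meminfo
If a node cannot back its share, its per-node HugePages_Total comes back short of the expected 512, which is the kind of mismatch the verify_nr_hugepages pass that follows is scanning for.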
13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.902 13:33:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:25.204 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:25.204 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.204 13:33:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110419456 kB' 'MemAvailable: 113609552 kB' 'Buffers: 4132 kB' 'Cached: 9370664 kB' 'SwapCached: 0 kB' 'Active: 6375992 kB' 'Inactive: 3495736 kB' 'Active(anon): 5987328 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499692 kB' 'Mapped: 213880 kB' 'Shmem: 5490396 kB' 'KReclaimable: 260052 kB' 'Slab: 962152 kB' 'SReclaimable: 260052 kB' 'SUnreclaim: 702100 kB' 'KernelStack: 27184 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7458924 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235480 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.204 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.205 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110420228 kB' 'MemAvailable: 113610324 kB' 'Buffers: 4132 kB' 'Cached: 9370664 kB' 'SwapCached: 0 kB' 'Active: 6376264 kB' 'Inactive: 3495736 kB' 'Active(anon): 5987600 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500012 kB' 'Mapped: 213880 kB' 'Shmem: 5490396 kB' 'KReclaimable: 260052 kB' 'Slab: 962148 kB' 'SReclaimable: 260052 kB' 'SUnreclaim: 702096 kB' 'KernelStack: 27152 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7458940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235432 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.206 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 
13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- 
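[editorial sketch] The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo line by line: each "Key: value" pair is split on IFS=': ', every non-matching key falls through to continue, and the requested key's value is echoed (0 here for HugePages_Surp). Below is a self-contained sketch of that pattern, under a hypothetical name get_meminfo_sketch, reconstructed only from what the trace shows; the real setup/common.sh may differ in details.

#!/usr/bin/env bash
# Sketch of the parsing loop traced above; illustrative, not the actual SPDK code.
shopt -s extglob                                # needed for the "Node N " strip below
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node lookups (seen later in this log) read the node's own meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        continue                                # every other field is skipped, as traced
    done < <(printf '%s\n' "${mem[@]}")
    echo 0
}
# In the run above: get_meminfo_sketch HugePages_Surp  -> 0
#                   get_meminfo_sketch HugePages_Total -> 1024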
setup/hugepages.sh@99 -- # surp=0 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110420960 kB' 'MemAvailable: 113611056 kB' 'Buffers: 4132 kB' 'Cached: 9370684 kB' 'SwapCached: 0 kB' 'Active: 6375184 kB' 'Inactive: 3495736 kB' 'Active(anon): 5986520 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499340 kB' 'Mapped: 213788 kB' 'Shmem: 5490416 kB' 'KReclaimable: 260052 kB' 'Slab: 962120 kB' 'SReclaimable: 260052 kB' 'SUnreclaim: 702068 kB' 'KernelStack: 27152 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7458960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235432 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.207 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.208 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.209 nr_hugepages=1024 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.209 resv_hugepages=0 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.209 surplus_hugepages=0 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.209 anon_hugepages=0 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:25.209 
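[editorial sketch] With surp=0 and resv=0 collected, the trace echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserts the configured pool is fully accounted for before re-reading HugePages_Total. A hedged sketch of the shape of those checks (hugepages.sh@107/@109 in the trace), reusing the hypothetical helper sketched earlier; how the real script sources each operand is not fully visible in this excerpt.

# Values in the comments are the ones observed in this run.
nr_hugepages=1024                                  # what the even_2G_alloc test requested
surp=$(get_meminfo_sketch HugePages_Surp)          # -> 0
resv=$(get_meminfo_sketch HugePages_Rsvd)          # -> 0
total=$(get_meminfo_sketch HugePages_Total)        # -> 1024
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
(( total == nr_hugepages + surp + resv ))          # nothing in the pool unaccounted for
(( total == nr_hugepages ))                        # and no surplus/reserved pages expected here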
13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.209 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110421576 kB' 'MemAvailable: 113611672 kB' 'Buffers: 4132 kB' 'Cached: 9370724 kB' 'SwapCached: 0 kB' 'Active: 6374860 kB' 'Inactive: 3495736 kB' 'Active(anon): 5986196 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498936 kB' 'Mapped: 213788 kB' 'Shmem: 5490456 kB' 'KReclaimable: 260052 kB' 'Slab: 962120 kB' 'SReclaimable: 260052 kB' 'SUnreclaim: 702068 kB' 'KernelStack: 27136 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7458984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235432 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.210 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 
13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61160368 kB' 'MemUsed: 4498640 kB' 'SwapCached: 0 kB' 'Active: 1221732 kB' 'Inactive: 204328 kB' 'Active(anon): 1053676 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1218852 kB' 'Mapped: 90496 kB' 'AnonPages: 210436 kB' 'Shmem: 846468 kB' 'KernelStack: 14696 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157800 kB' 'Slab: 500112 kB' 'SReclaimable: 157800 kB' 'SUnreclaim: 342312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.211 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
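The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above and below is xtrace output from the get_meminfo field scan in setup/common.sh: each meminfo key is compared against the requested one and skipped until it matches, at which point the value is echoed and the function returns. A minimal stand-alone sketch of that scan, reconstructed from the traced commands rather than copied from setup/common.sh, would look like:

  #!/usr/bin/env bash
  # Scan "Key: value [unit]" records on stdin and print the value of the
  # requested key; a trailing unit column, if present, is discarded into _.
  scan_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # traced as "[[ MemTotal == \H\u\g\e... ]]" then "continue"
          echo "$val"                        # traced as "echo 1024" / "echo 0"
          return 0
      done
      return 1
  }
  # Example: scan_meminfo HugePages_Surp < /sys/devices/system/node/node0/meminfo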
00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.212 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
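At this point the trace moves from node 0 to node 1: setup/common.sh@22-24 fall back to /proc/meminfo unless a per-node meminfo file exists, and @29 strips the "Node <id> " prefix that the per-node files carry, so the same scan loop works for both sources. A hedged reconstruction of that selection step (function and variable names follow the trace, not necessarily the real helper):

  #!/usr/bin/env bash
  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing
  # Pick the meminfo source for an optional NUMA node and normalise its lines.
  read_meminfo_lines() {
      local node=$1 mem_f mem
      mem_f=/proc/meminfo                                    # default: system-wide view
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view when the node exists
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <id> "; drop it so the
      # "Key: value" layout matches /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      printf '%s\n' "${mem[@]}"
  }
  # Example, mirroring the node 1 lookup that follows:
  #   read_meminfo_lines 1 | grep HugePages

With an empty node argument the -e test fails and the /proc/meminfo default is kept, which is what the system-wide lookups later in this log (node=) do.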
00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 49258968 kB' 'MemUsed: 11420876 kB' 'SwapCached: 0 kB' 'Active: 5155088 kB' 'Inactive: 3291408 kB' 'Active(anon): 4934480 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8156004 kB' 'Mapped: 123280 kB' 'AnonPages: 290604 kB' 'Shmem: 4643988 kB' 'KernelStack: 12488 kB' 'PageTables: 5000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102252 kB' 'Slab: 462008 kB' 'SReclaimable: 102252 kB' 'SUnreclaim: 359756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 
13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.213 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
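A few more fields close the node 1 scan just below, after which hugepages.sh@126-130 reports "node0=512 expecting 512" and "node1=512 expecting 512" and even_2G_alloc passes. The odd_alloc test that starts immediately afterwards asks for 1025 pages (HUGEMEM=2049 with the 2048 kB page size shown in the meminfo dumps) and, per the assignments traced at hugepages.sh@81-84, splits them 512/513 across the two nodes. A small sketch of that per-node split, reconstructed from the traced values rather than from hugepages.sh itself:

  #!/usr/bin/env bash
  # Walk the nodes from the highest index down, giving each node
  # remaining/remaining_nodes pages; in this reconstruction any odd
  # page therefore ends up on node 0.
  split_hugepages() {
      local remaining=$1 no_nodes=$2 i
      local -a nodes_test
      while ((no_nodes > 0)); do
          nodes_test[no_nodes - 1]=$((remaining / no_nodes))
          : $((remaining -= nodes_test[no_nodes - 1]))   # traced as ": 513", ": 0"
          : $((no_nodes -= 1))                           # traced as ": 1", ": 0"
      done
      for i in "${!nodes_test[@]}"; do
          echo "node$i=${nodes_test[i]}"
      done
  }
  split_hugepages 1024 2   # even_2G_alloc: node0=512 node1=512
  split_hugepages 1025 2   # odd_alloc: 512 + 513 (the odd page lands on node 0 in this sketch)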
00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:25.214 node0=512 expecting 512 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:25.214 node1=512 expecting 512 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:25.214 00:05:25.214 real 0m3.555s 00:05:25.214 user 0m1.446s 00:05:25.214 sys 0m2.173s 00:05:25.214 13:33:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:25.214 13:33:18 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:25.214 ************************************ 00:05:25.214 END TEST even_2G_alloc 00:05:25.214 ************************************ 00:05:25.475 13:33:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:25.475 13:33:18 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:25.475 13:33:18 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:25.475 13:33:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.475 ************************************ 00:05:25.475 START TEST odd_alloc 00:05:25.475 ************************************ 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.475 13:33:18 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:28.778 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:28.778 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110395152 kB' 'MemAvailable: 113585244 kB' 'Buffers: 4132 kB' 'Cached: 9370840 kB' 'SwapCached: 0 kB' 'Active: 6379788 kB' 
'Inactive: 3495736 kB' 'Active(anon): 5991124 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503792 kB' 'Mapped: 214716 kB' 'Shmem: 5490572 kB' 'KReclaimable: 260044 kB' 'Slab: 961868 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701824 kB' 'KernelStack: 27152 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 7465868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235388 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.778 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110397420 kB' 'MemAvailable: 113587512 kB' 'Buffers: 4132 kB' 'Cached: 9370844 kB' 'SwapCached: 0 kB' 'Active: 6374452 kB' 'Inactive: 3495736 kB' 'Active(anon): 5985788 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498468 kB' 'Mapped: 214156 kB' 'Shmem: 5490576 kB' 'KReclaimable: 260044 kB' 'Slab: 961836 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701792 kB' 'KernelStack: 27152 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 7459764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235384 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.779 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.780 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:28.781 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110398396 kB' 'MemAvailable: 113588488 kB' 'Buffers: 4132 kB' 'Cached: 9370860 kB' 'SwapCached: 0 kB' 'Active: 6374412 kB' 'Inactive: 3495736 kB' 'Active(anon): 5985748 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498400 kB' 'Mapped: 214156 kB' 'Shmem: 5490592 kB' 'KReclaimable: 260044 kB' 'Slab: 961880 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701836 kB' 'KernelStack: 27152 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 7477932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235416 kB' 
'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.046 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.047 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 
13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:29.048 nr_hugepages=1025 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:29.048 resv_hugepages=0 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:29.048 surplus_hugepages=0 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:29.048 anon_hugepages=0 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110399932 kB' 'MemAvailable: 113590024 kB' 'Buffers: 4132 kB' 'Cached: 9370888 kB' 'SwapCached: 0 kB' 'Active: 6374360 kB' 'Inactive: 3495736 kB' 'Active(anon): 5985696 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498316 kB' 'Mapped: 213792 kB' 'Shmem: 5490620 kB' 'KReclaimable: 260044 kB' 'Slab: 961840 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701796 kB' 'KernelStack: 27120 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 7459436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235384 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.048 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.049 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
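The backslash-riddled comparisons throughout this stretch are just bash xtrace printing a literal pattern: get_meminfo (setup/common.sh) splits each meminfo line with IFS=': ' read -r var val _ and skips every key with continue until it reaches the requested field, HugePages_Total here, then echoes its value (1025 in this run). A minimal stand-alone sketch of that lookup pattern, with an illustrative function name rather than the exact SPDK helper:

    # Sketch only: return one field from a meminfo-format file.
    meminfo_field() {
        local want=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip every other key, as in the trace
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

    meminfo_field HugePages_Total   # prints 1025 on this machine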
00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61139044 kB' 'MemUsed: 4519964 kB' 'SwapCached: 0 kB' 'Active: 1218904 kB' 'Inactive: 204328 kB' 
'Active(anon): 1050848 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1218976 kB' 'Mapped: 90496 kB' 'AnonPages: 207480 kB' 'Shmem: 846592 kB' 'KernelStack: 14680 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157800 kB' 'Slab: 499900 kB' 'SReclaimable: 157800 kB' 'SUnreclaim: 342100 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.050 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.051 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 49260940 kB' 'MemUsed: 11418904 kB' 'SwapCached: 0 kB' 'Active: 5155416 kB' 'Inactive: 3291408 kB' 'Active(anon): 4934808 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8156064 kB' 'Mapped: 123296 kB' 'AnonPages: 290780 kB' 'Shmem: 4644048 kB' 'KernelStack: 12440 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102244 kB' 'Slab: 461940 kB' 'SReclaimable: 102244 kB' 'SUnreclaim: 359696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
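The two printf dumps in this stretch are the per-node lookups: get_meminfo HugePages_Surp is pointed at /sys/devices/system/node/node0/meminfo and then node1/meminfo instead of /proc/meminfo, with node0 reporting HugePages_Total: 512 and node1 reporting 513. The only extra step over the system-wide scan is stripping the "Node <id> " prefix the kernel prepends to every per-node line (the mapfile and mem=("${mem[@]#Node +([0-9]) }") steps in the trace). A simplified, hedged sketch of that variant, again with an illustrative function name:

    # Sketch only: return one field from a node's meminfo file.
    node_meminfo_field() {
        local want=$1 node=$2
        local file=/sys/devices/system/node/node${node}/meminfo
        local prefix="Node ${node} "
        local -a mem
        mapfile -t mem < "$file"
        mem=("${mem[@]#"$prefix"}")            # drop the "Node N " prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    node_meminfo_field HugePages_Total 0   # 512 in this run
    node_meminfo_field HugePages_Total 1   # 513 in this run
    node_meminfo_field HugePages_Surp 0    # 0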
00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.052 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
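At this point both nodes have been read back: 512 pages on node0 and 513 on node1, which together account for the 1025 reported system-wide, and HugePages_Surp is 0 on both. The bookkeeping the test finishes just below reduces to the following arithmetic (a hedged sketch; the real script also folds reserved and surplus pages into the per-node totals, all 0 in this run):

    total=1025            # HugePages_Total from /proc/meminfo
    node_pages=(512 513)  # HugePages_Total of node0 and node1
    surp=0                # HugePages_Surp was 0 on both nodes
    (( total == node_pages[0] + node_pages[1] + surp )) \
        && echo "512 + 513 == 1025: every page landed on a node"
    # The final comparison treats the per-node counts as an unordered pair,
    # so it does not matter which node received the extra page:
    [[ "${node_pages[*]}" == "512 513" ]] && echo "odd_alloc split OK"

The 'node0=512 expecting 513' / 'node1=513 expecting 512' lines that follow are therefore not a failure: the per-node expectation landed on the other node, and the comparison of the sorted counts (the [[ 512 513 == 512 513 ]] check) still passes.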
00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:29.053 node0=512 expecting 513 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:29.053 node1=513 expecting 512 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:29.053 00:05:29.053 real 0m3.621s 00:05:29.053 user 0m1.431s 00:05:29.053 sys 0m2.253s 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:29.053 13:33:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:29.053 ************************************ 00:05:29.053 END TEST odd_alloc 00:05:29.053 ************************************ 00:05:29.053 13:33:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:29.053 13:33:21 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:29.053 13:33:21 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:29.053 13:33:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:29.053 ************************************ 00:05:29.053 START TEST custom_alloc 00:05:29.053 ************************************ 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:29.053 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.054 13:33:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:32.359 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:80:01.3 (8086 0b00): 
Already using the vfio-pci driver 00:05:32.359 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:32.359 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:32.359 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109374372 kB' 'MemAvailable: 112564464 kB' 'Buffers: 4132 kB' 'Cached: 9371016 kB' 'SwapCached: 0 kB' 'Active: 6375612 kB' 'Inactive: 3495736 kB' 'Active(anon): 5986948 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499556 kB' 'Mapped: 213908 kB' 
'Shmem: 5490748 kB' 'KReclaimable: 260044 kB' 'Slab: 961876 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701832 kB' 'KernelStack: 27168 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 7460360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235400 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.626 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
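The dense xtrace output around this point is one pass of the meminfo lookup helper in test/setup/common.sh: it reads /proc/meminfo (or a per-NUMA-node meminfo file), then walks the fields one by one, taking the 'continue' branch for every key that is not the one requested (here AnonHugePages) and printing the value of the key that matches. A minimal sketch of what that helper appears to do, reconstructed only from the traced commands (the exact body and names are assumptions, not the verbatim SPDK source):

#!/usr/bin/env bash
# Hypothetical reconstruction of get_meminfo() from test/setup/common.sh,
# based solely on the xtrace lines in this log.
shopt -s extglob # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1  # meminfo field to look up, e.g. AnonHugePages
    local node=$2 # optional NUMA node; empty means system-wide
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan the fields one by one; this is the long [[ ... ]] / continue
    # sequence that fills the trace, ending in echo/return on a match.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
        continue
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

On this builder the AnonHugePages lookup returns 0, which is why each pass in the trace ends with 'echo 0' followed by 'return 0'.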
00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 
13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.627 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local 
node= 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109374976 kB' 'MemAvailable: 112565068 kB' 'Buffers: 4132 kB' 'Cached: 9371020 kB' 'SwapCached: 0 kB' 'Active: 6375168 kB' 'Inactive: 3495736 kB' 'Active(anon): 5986504 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499064 kB' 'Mapped: 213848 kB' 'Shmem: 5490752 kB' 'KReclaimable: 260044 kB' 'Slab: 961904 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701860 kB' 'KernelStack: 27104 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 7461624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235368 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.628 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 
13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.629 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109375552 kB' 'MemAvailable: 112565644 kB' 'Buffers: 4132 kB' 'Cached: 9371036 kB' 'SwapCached: 0 kB' 'Active: 6375380 kB' 'Inactive: 3495736 kB' 'Active(anon): 5986716 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499292 kB' 'Mapped: 213848 kB' 'Shmem: 5490768 kB' 'KReclaimable: 260044 kB' 'Slab: 961904 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701860 kB' 'KernelStack: 27072 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 7462148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235368 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.630 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 
13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.631 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:32.632 nr_hugepages=1536 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.632 resv_hugepages=0 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.632 surplus_hugepages=0 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.632 anon_hugepages=0 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 109375824 kB' 'MemAvailable: 112565916 kB' 'Buffers: 4132 kB' 'Cached: 9371060 kB' 'SwapCached: 0 kB' 'Active: 6374992 kB' 'Inactive: 3495736 kB' 'Active(anon): 5986328 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498896 kB' 'Mapped: 213848 kB' 'Shmem: 
5490792 kB' 'KReclaimable: 260044 kB' 'Slab: 961904 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701860 kB' 'KernelStack: 27152 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 7461804 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235320 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
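Sketch (not part of the captured trace): the xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo key by key with IFS=': ' read -r var val _ until it reaches the requested field (here HugePages_Total). A minimal stand-alone helper doing the same kind of lookup could look like the following; the function name and the optional per-node argument are illustrative assumptions, not the SPDK script's own interface:

    # Return the value of one meminfo field, optionally for a single NUMA node.
    get_meminfo_field() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        # Per-node statistics live under sysfs when a node index is given.
        if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
            file=/sys/devices/system/node/node${node}/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node "$node" }         # per-node files prefix each line with "Node <n> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$file"
        echo 0                                  # field not present
    }
    # e.g. get_meminfo_field HugePages_Total   -> 1536 on this runner
    #      get_meminfo_field HugePages_Free 0  -> per-node count for NUMA node 0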
00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.632 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.633 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
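Sketch (not part of the captured trace): the get_nodes loop traced here walks /sys/devices/system/node/node* and records how many huge pages each NUMA node currently holds (512 on node 0 and 1024 on node 1 in this run). Roughly the same per-node counts can be read directly from sysfs; the hugepages-2048kB directory below is an assumption matching the "Hugepagesize: 2048 kB" reported earlier:

    # Print the per-node 2 MiB huge page allocation, e.g. "node0=512", "node1=1024".
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        count=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${count}"
    done

The custom_alloc check that follows only passes when those per-node counts add up to the 1536 pages requested and the final comparison [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] holds.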
00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61145176 kB' 'MemUsed: 4513832 kB' 'SwapCached: 0 kB' 'Active: 1218944 kB' 'Inactive: 204328 kB' 'Active(anon): 1050888 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1219104 kB' 'Mapped: 90496 kB' 'AnonPages: 207388 kB' 'Shmem: 846720 kB' 'KernelStack: 14680 kB' 'PageTables: 3304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157800 kB' 'Slab: 500044 kB' 'SReclaimable: 157800 kB' 'SUnreclaim: 342244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.634 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679844 kB' 'MemFree: 48230144 kB' 'MemUsed: 12449700 kB' 'SwapCached: 0 kB' 'Active: 5156156 kB' 'Inactive: 3291408 kB' 'Active(anon): 4935548 kB' 'Inactive(anon): 0 kB' 'Active(file): 220608 kB' 'Inactive(file): 3291408 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8156108 kB' 'Mapped: 123352 kB' 'AnonPages: 291608 kB' 'Shmem: 4644092 kB' 'KernelStack: 12424 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102244 kB' 'Slab: 461860 kB' 'SReclaimable: 102244 kB' 'SUnreclaim: 359616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.635 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.636 13:33:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:32.636 13:33:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:32.637 node0=512 expecting 512 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:32.637 node1=1024 expecting 1024 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:32.637 00:05:32.637 real 0m3.622s 00:05:32.637 user 0m1.484s 00:05:32.637 sys 0m2.194s 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.637 13:33:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:32.637 ************************************ 00:05:32.637 END TEST custom_alloc 00:05:32.637 ************************************ 00:05:32.898 13:33:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:32.898 13:33:25 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.898 13:33:25 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.898 13:33:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:32.898 ************************************ 00:05:32.898 START TEST no_shrink_alloc 00:05:32.898 ************************************ 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:32.898 13:33:25 
00:05:32.898 13:33:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:32.898 13:33:25 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:05:32.898 13:33:25 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:05:32.898 13:33:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:32.898 ************************************
00:05:32.898 START TEST no_shrink_alloc
00:05:32.898 ************************************
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:32.898 13:33:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:36.200 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:05:36.200 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:36.200 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
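The get_test_nr_hugepages 2097152 0 call above resolves to nr_hugepages=1024, which matches a 2097152 kB request split into 2048 kB pages and pinned to node 0. A rough standalone sketch of that arithmetic follows; treating the size argument as kB is an assumption read off the numbers in this run, not taken from hugepages.sh.

  # Sketch: turn a size request into a 2 MB hugepage count.
  size_kb=2097152
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
  nr_hugepages=$(( size_kb / hugepage_kb ))                        # 2097152 / 2048 = 1024
  echo "requesting ${nr_hugepages} hugepages"
  # A setup script would then write this count into nr_hugepages, either globally
  # (/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages) or per node.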
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:36.200 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110447968 kB' 'MemAvailable: 113638060 kB' 'Buffers: 4132 kB' 'Cached: 9371192 kB' 'SwapCached: 0 kB' 'Active: 6376092 kB' 'Inactive: 3495736 kB' 'Active(anon): 5987428 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500316 kB' 'Mapped: 213952 kB' 'Shmem: 5490924 kB' 'KReclaimable: 260044 kB' 'Slab: 961672 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701628 kB' 'KernelStack: 27072 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7464232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235416 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB'
[xtrace trimmed: setup/common.sh@31-@32 step through each /proc/meminfo field ahead of AnonHugePages and skip it with continue]
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
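The get_meminfo call condensed above scans /proc/meminfo (or a per-node meminfo file when a node id is supplied) line by line and returns the value of the single requested field. A hedged stand-in for that lookup follows; get_meminfo_field is an invented name and the awk one-liner is not the setup/common.sh implementation, it only mirrors the behaviour visible in this trace.

  # Sketch: print the value of one meminfo field, optionally for a specific NUMA node.
  get_meminfo_field() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo   # lines look like "Node 0 AnonHugePages: 0 kB"
      fi
      # strip the optional "Node N " prefix, then emit the value of the requested key
      awk -v key="$get" '{ sub(/^Node [0-9]+ /, ""); if ($1 == key ":") { print $2; exit } }' "$mem_f"
  }
  get_meminfo_field AnonHugePages     # prints 0 on the system captured above
  get_meminfo_field HugePages_Total   # prints 1024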
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:36.202 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110448072 kB' 'MemAvailable: 113638164 kB' 'Buffers: 4132 kB' 'Cached: 9371196 kB' 'SwapCached: 0 kB' 'Active: 6377468 kB' 'Inactive: 3495736 kB' 'Active(anon): 5988804 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501336 kB' 'Mapped: 213952 kB' 'Shmem: 5490928 kB' 'KReclaimable: 260044 kB' 'Slab: 961640 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701596 kB' 'KernelStack: 27280 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7464396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235464 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB'
[xtrace trimmed: setup/common.sh@31-@32 step through each /proc/meminfo field ahead of HugePages_Surp and skip it with continue]
00:05:36.203 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:36.203 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:36.203 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:36.203 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
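HugePages_Surp above is the machine-wide figure from /proc/meminfo; the per-NUMA-node counters that the nodeN=... checks in these tests ultimately compare against are exposed separately under sysfs. A small example of reading them directly (standard Linux sysfs paths for 2048 kB pages, nothing SPDK-specific):

  # Sketch: dump per-node 2 MB hugepage counters.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      hp=$node_dir/hugepages/hugepages-2048kB
      echo "node${node}: nr=$(<"$hp"/nr_hugepages) free=$(<"$hp"/free_hugepages) surplus=$(<"$hp"/surplus_hugepages)"
  done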
'KernelStack: 27216 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7464420 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235464 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.204 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
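The xtrace entries above and below come from setup/common.sh's get_meminfo helper scanning each /proc/meminfo key until it reaches HugePages_Rsvd. A minimal standalone sketch of that lookup follows; it is reconstructed from the trace rather than copied from the repository, and the function name get_meminfo_sketch is illustrative.

    # Hedged sketch of the meminfo lookup traced here; reconstructed from the
    # xtrace output of setup/common.sh, not copied from the repository.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val
        local mem_f=/proc/meminfo
        local -a mem
        # A per-node query reads that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "Key: value unit" on ':' and spaces, as the trace does.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip until the requested key
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd      # system-wide reserved hugepages (0 in this run)
    get_meminfo_sketch HugePages_Surp 0    # surplus hugepages on NUMA node 0 (0 in this run)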
00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:36.205 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.206 nr_hugepages=1024 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.206 resv_hugepages=0 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.206 surplus_hugepages=0 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.206 anon_hugepages=0 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110449244 kB' 'MemAvailable: 113639336 kB' 'Buffers: 4132 kB' 'Cached: 9371236 kB' 'SwapCached: 0 kB' 'Active: 6376312 kB' 'Inactive: 3495736 kB' 'Active(anon): 5987648 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499988 kB' 'Mapped: 213824 kB' 'Shmem: 5490968 kB' 'KReclaimable: 
260044 kB' 'Slab: 961624 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701580 kB' 'KernelStack: 27168 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7462860 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235480 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.206 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.207 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.207 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.207 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
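The hugepages.sh checks interleaved in this trace (the nr_hugepages/resv_hugepages/surplus_hugepages echoes above and the per-node loop that follows) amount to the accounting sketched below, using the numbers visible in this run; the variable names are illustrative rather than the script's own.

    # Hedged sketch of the hugepage accounting hugepages.sh performs around this
    # point, using the values visible in this run.
    nr_hugepages=1024   # HugePages_Total from /proc/meminfo
    resv=0              # HugePages_Rsvd
    surp=0              # HugePages_Surp
    anon=0              # AnonHugePages, reported but not part of the sum

    # The system-wide total must equal allocated + surplus + reserved pages.
    (( 1024 == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"

    # Per-node view: ask each node's meminfo how many pages it actually holds.
    declare -A node_pages
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        node_pages[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    for n in "${!node_pages[@]}"; do
        # This run has two nodes, with all 1024 pages on node0 ("node0=1024 expecting 1024").
        echo "node$n=${node_pages[$n]}"
    done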
00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.470 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.471 13:33:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60118020 kB' 'MemUsed: 5540988 kB' 'SwapCached: 0 kB' 'Active: 1219364 kB' 'Inactive: 204328 kB' 'Active(anon): 1051308 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1219224 kB' 'Mapped: 90496 kB' 'AnonPages: 207672 kB' 'Shmem: 846840 kB' 'KernelStack: 14888 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157800 kB' 'Slab: 499964 kB' 'SReclaimable: 157800 kB' 'SUnreclaim: 342164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.471 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.472 node0=1024 expecting 1024 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.472 13:33:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:39.775 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:39.775 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:00:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:05:39.775 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:39.775 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.775 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110439256 kB' 'MemAvailable: 113629348 kB' 'Buffers: 4132 kB' 'Cached: 9371352 kB' 'SwapCached: 0 kB' 'Active: 6379196 kB' 'Inactive: 3495736 kB' 'Active(anon): 5990532 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502764 kB' 'Mapped: 213908 kB' 'Shmem: 5491084 kB' 'KReclaimable: 260044 kB' 'Slab: 961848 kB' 'SReclaimable: 260044 kB' 'SUnreclaim: 701804 kB' 'KernelStack: 27408 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7465568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235528 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB'
[log condensed: setup/common.sh@31-32 then loops IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue, skipping every /proc/meminfo field from MemTotal through HardwareCorrupted (00:05:39.775-00:05:39.776)]
00:05:39.776 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.776 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:39.776 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:39.776 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
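[Note: the condensed @31/@32 entries are one helper in the SPDK test scripts walking /proc/meminfo key by key. A minimal sketch of that logic, reconstructed from the trace above rather than copied from setup/common.sh, with the per-node "Node N" prefix handling elided:]
  # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo (or from the
  # per-node meminfo file when NODE is given); print 0 when the key is absent.
  get_meminfo() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo var val _
      [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "${val:-0}" && return 0
      done <"$mem_f"
      echo 0
  }
[Here the AnonHugePages lookup returned 0, so anonymous transparent hugepages contribute nothing to the totals checked below.]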
00:05:39.776 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[log condensed: the setup/common.sh@17-31 preamble repeats (local get=HugePages_Surp, node=, mem_f=/proc/meminfo, mapfile -t mem), then common.sh@16 prints the current snapshot:]
'MemTotal: 126338852 kB' 'MemFree: 110440772 kB' 'MemAvailable: 113630848 kB' 'Buffers: 4132 kB' 'Cached: 9371352 kB' 'SwapCached: 0 kB' 'Active: 6380144 kB' 'Inactive: 3495736 kB' 'Active(anon): 5991480 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504700 kB' 'Mapped: 213984 kB' 'Shmem: 5491084 kB' 'KReclaimable: 260012 kB' 'Slab: 961848 kB' 'SReclaimable: 260012 kB' 'SUnreclaim: 701836 kB' 'KernelStack: 27312 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7484520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235528 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB'
[log condensed: setup/common.sh@31-32 skips every field from MemTotal through HugePages_Rsvd, none matching HugePages_Surp (00:05:39.777-00:05:39.778)]
00:05:39.778 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.778 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:39.778 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:39.778 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
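[Note: the surplus count just read, the reserved count read next, and HugePages_Total feed the arithmetic further down (setup/hugepages.sh@107: (( 1024 == nr_hugepages + surp + resv ))). A hedged way to inspect the same counters by hand on a node like this one, using only standard kernel interfaces and nothing SPDK-specific:]
  grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages                      # system-wide 2 MiB page count
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages      # per-node count behind "node0=1024 expecting 1024" above
[With this run's values (total=1024, surp=0, resv=0) the later check reduces to 1024 == 1024 + 0 + 0, which is why the job carries on even though only 512 hugepages were requested.]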
00:05:39.778 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[log condensed: the setup/common.sh@17-31 preamble repeats (local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem), then common.sh@16 prints the current snapshot:]
'MemTotal: 126338852 kB' 'MemFree: 110443140 kB' 'MemAvailable: 113633216 kB' 'Buffers: 4132 kB' 'Cached: 9371372 kB' 'SwapCached: 0 kB' 'Active: 6379392 kB' 'Inactive: 3495736 kB' 'Active(anon): 5990728 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502924 kB' 'Mapped: 213852 kB' 'Shmem: 5491104 kB' 'KReclaimable: 260012 kB' 'Slab: 961836 kB' 'SReclaimable: 260012 kB' 'SUnreclaim: 701824 kB' 'KernelStack: 27328 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7465240 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235416 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB'
[log condensed: setup/common.sh@31-32 skips every field from MemTotal through FilePmdMapped, none matching HugePages_Rsvd (00:05:39.779-00:05:39.780)]
00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.780 13:33:32
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:39.780 nr_hugepages=1024 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:39.780 resv_hugepages=0 00:05:39.780 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:39.780 surplus_hugepages=0 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:39.781 anon_hugepages=0 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:39.781 13:33:32 
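The long run of [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue entries above is the get_meminfo helper in setup/common.sh scanning every field of the meminfo file under xtrace until it reaches the requested key, then echoing that key's value (0 here, which becomes resv=0). A minimal stand-alone sketch of that kind of lookup, assuming a standard Linux /proc and sysfs layout; the function name is illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
# Illustrative meminfo lookup in the spirit of the traced helper: return the
# value of one field, read from /proc/meminfo or from a node's meminfo file
# when a NUMA node number is passed as the second argument.
meminfo_value() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every line with "Node <n> "; strip it so the same
    # key/value parse works for both files.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

echo "nr_hugepages=$(meminfo_value HugePages_Total)"
echo "resv_hugepages=$(meminfo_value HugePages_Rsvd)"
echo "node0 HugePages_Surp=$(meminfo_value HugePages_Surp 0)"

On a host configured like this one, such a lookup yields the same nr_hugepages=1024 / resv_hugepages=0 values echoed in the log below.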
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338852 kB' 'MemFree: 110442676 kB' 'MemAvailable: 113632752 kB' 'Buffers: 4132 kB' 'Cached: 9371392 kB' 'SwapCached: 0 kB' 'Active: 6378368 kB' 'Inactive: 3495736 kB' 'Active(anon): 5989704 kB' 'Inactive(anon): 0 kB' 'Active(file): 388664 kB' 'Inactive(file): 3495736 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501824 kB' 'Mapped: 213852 kB' 'Shmem: 5491124 kB' 'KReclaimable: 260012 kB' 'Slab: 961836 kB' 'SReclaimable: 260012 kB' 'SUnreclaim: 701824 kB' 'KernelStack: 27296 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 7463536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235464 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3110260 kB' 'DirectMap2M: 22784000 kB' 'DirectMap1G: 110100480 kB' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.781 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:39.782 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.045 13:33:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60117360 kB' 'MemUsed: 5541648 kB' 'SwapCached: 0 kB' 'Active: 1220308 kB' 'Inactive: 204328 kB' 'Active(anon): 1052252 kB' 'Inactive(anon): 0 kB' 'Active(file): 168056 kB' 'Inactive(file): 204328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1219344 kB' 'Mapped: 90496 kB' 'AnonPages: 208488 kB' 'Shmem: 846960 kB' 'KernelStack: 14696 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157768 kB' 'Slab: 499908 kB' 'SReclaimable: 157768 kB' 'SUnreclaim: 342140 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
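The mem=("${mem[@]#Node +([0-9]) }") expansion visible in the trace strips the leading "Node <n> " prefix from every line of the per-node meminfo file, so the same read loop can parse it exactly like /proc/meminfo. A small self-contained demonstration of that extglob idiom, with sample lines made up for illustration:

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

# Per-node meminfo lines carry a "Node <n> " prefix that /proc/meminfo lines
# do not; stripping it up front lets one parser handle both files.
mem=(
    'Node 0 MemTotal:       65659008 kB'
    'Node 0 HugePages_Surp:      0'
)
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# Prints:
#   MemTotal:       65659008 kB
#   HugePages_Surp:      0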
00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.045 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.046 node0=1024 expecting 1024 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- 
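The arithmetic at this point in the trace is bookkeeping: the requested page count has to match the kernel's global HugePages_Total, and the per-node counters (node0=1024 expecting 1024 here) have to account for it, with reserved and surplus pages folded into the comparison. A condensed, illustrative version of that check, assuming the usual /proc/meminfo and per-node meminfo files; variable names are not the SPDK ones:

#!/usr/bin/env bash
shopt -s nullglob
# Compare the requested hugepage count with the kernel's global counter and
# with the sum of the per-node counters. The real test also folds
# HugePages_Rsvd and HugePages_Surp into its comparison.
requested=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

(( requested == total )) || echo "global count is $total, expected $requested"

sum=0
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    per_node=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    echo "node$n=$per_node"
    (( sum += per_node ))
done
(( sum == total )) || echo "per-node counts add up to $sum, not $total"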
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.046 00:05:40.046 real 0m7.115s 00:05:40.046 user 0m2.747s 00:05:40.046 sys 0m4.485s 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.046 13:33:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:40.046 ************************************ 00:05:40.046 END TEST no_shrink_alloc 00:05:40.046 ************************************ 00:05:40.046 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:40.046 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:40.047 13:33:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:40.047 00:05:40.047 real 0m26.026s 00:05:40.047 user 0m10.293s 00:05:40.047 sys 0m16.126s 00:05:40.047 13:33:32 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:40.047 13:33:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:40.047 ************************************ 00:05:40.047 END TEST hugepages 00:05:40.047 ************************************ 00:05:40.047 13:33:32 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:40.047 13:33:32 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:40.047 13:33:32 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:40.047 13:33:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:40.047 ************************************ 00:05:40.047 START TEST driver 00:05:40.047 ************************************ 00:05:40.047 13:33:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:40.047 * Looking for test storage... 
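The clear_hp entries above give back the pages the test allocated by writing 0 to every hugepage pool on every NUMA node before the driver tests start. A minimal equivalent under the standard sysfs layout (the writes need root); the CLEAR_HUGE export mirrors the one in the trace:

#!/usr/bin/env bash
shopt -s nullglob
# Release every hugepage pool on every NUMA node, as the traced clear_hp
# step does before the next test group runs.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes   # exported by the test before the next setup.sh reset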
00:05:40.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:40.047 13:33:32 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:40.047 13:33:32 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.047 13:33:32 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:45.373 13:33:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:45.373 13:33:37 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:45.373 13:33:37 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:45.373 13:33:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:45.373 ************************************ 00:05:45.373 START TEST guess_driver 00:05:45.373 ************************************ 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:45.373 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- 
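pick_driver settles on vfio-pci here because modprobe --show-depends resolves vfio_pci to real .ko modules and 322 IOMMU groups are present; enable_unsafe_noiommu_mode reads N, so the unsafe path is not needed. A simplified sketch of that decision, assuming the sysfs paths seen in the trace; the real helper has further fallbacks, ending in "No valid driver found" when nothing fits, which are omitted:

#!/usr/bin/env bash
shopt -s nullglob
# vfio-pci is considered usable when modprobe can resolve the module and
# either IOMMU groups exist or unsafe no-IOMMU mode is switched on.
pick_vfio() {
    # --show-depends prints the insmod lines without actually loading anything.
    modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko' || return 1

    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    local groups=(/sys/kernel/iommu_groups/*)
    (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy]* ]]
}

pick_vfio && echo 'Looking for driver=vfio-pci'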
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:45.373 Looking for driver=vfio-pci 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.373 13:33:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:47.924 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 
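The block of repeated read / [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] entries is the test consuming the setup.sh config listing line by line: the read pattern skips the first four whitespace-separated fields, takes the fifth as the "->" marker and the sixth as the driver the device ended up bound to, and fail stays 0 only if every device landed on the guessed driver. A sketch of that parsing loop; the sample input lines are invented for illustration and only copy the field layout implied by the read pattern in the trace:

#!/usr/bin/env bash
# Verify that every device line in a config listing reports the expected
# driver after the "->" marker. Sample input is hypothetical.
expected=vfio-pci
fail=0
while read -r _ _ _ _ marker driver; do
    [[ $marker == '->' ]] || continue        # skip lines that are not device bindings
    [[ $driver == "$expected" ]] || { echo "unexpected driver: $driver"; fail=1; }
done <<'EOF'
0000:65:00.0 (8086 0a54): nvme -> vfio-pci
0000:b1:00.0 (15b3 1017): mlx5_core -> vfio-pci
EOF
(( fail == 0 )) && echo "all devices bound to $expected"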
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:48.186 13:33:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.186 13:33:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:48.186 13:33:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:48.186 13:33:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:48.186 13:33:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:53.475 00:05:53.475 real 0m8.221s 00:05:53.475 user 0m2.711s 00:05:53.475 sys 0m4.747s 00:05:53.475 13:33:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.475 13:33:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:53.475 ************************************ 00:05:53.475 END TEST guess_driver 00:05:53.475 ************************************ 00:05:53.475 00:05:53.475 real 0m12.905s 00:05:53.475 user 0m4.051s 00:05:53.475 sys 0m7.310s 00:05:53.475 13:33:45 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.475 
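The guess_driver test that just completed decides which userspace PCI driver the rest of the suite will bind devices to. Stripped of the xtrace noise, the check it performed is: vfio-pci is eligible when the kernel exposes IOMMU groups (322 of them on this host) or allows unsafe no-IOMMU mode, and when vfio_pci and its dependencies resolve to real kernel modules via modprobe --show-depends. A minimal sketch of that decision, using the same commands seen above (illustrative shell only, not the test's driver.sh itself):

    # vfio-pci is usable if the IOMMU is active (or no-IOMMU mode is allowed)
    # and modprobe can resolve vfio_pci to loadable .ko modules.
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null)
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if { [[ $unsafe_vfio == Y ]] || (( ${#iommu_groups[@]} > 0 )); } &&
       modprobe --show-depends vfio_pci | grep -q '\.ko'; then
        echo 'Looking for driver=vfio-pci'
    fi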
13:33:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:53.475 ************************************ 00:05:53.475 END TEST driver 00:05:53.475 ************************************ 00:05:53.475 13:33:45 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:53.475 13:33:45 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.475 13:33:45 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.475 13:33:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:53.475 ************************************ 00:05:53.475 START TEST devices 00:05:53.475 ************************************ 00:05:53.475 13:33:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:53.475 * Looking for test storage... 00:05:53.475 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:53.475 13:33:45 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:53.475 13:33:45 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:53.475 13:33:45 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:53.475 13:33:45 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:56.774 13:33:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:56.774 No valid GPT data, bailing 00:05:56.774 
13:33:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:56.774 13:33:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:56.774 13:33:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:56.774 13:33:49 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.774 13:33:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:57.035 ************************************ 00:05:57.035 START TEST nvme_mount 00:05:57.035 ************************************ 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:05:57.035 13:33:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:57.975 Creating new GPT entries in memory. 00:05:57.975 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:57.975 other utilities. 00:05:57.975 13:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:57.975 13:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.975 13:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:57.975 13:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:57.975 13:33:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:58.919 Creating new GPT entries in memory. 00:05:58.919 The operation has completed successfully. 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1878490 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.919 13:33:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:02.220 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:02.481 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:02.481 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:02.741 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:02.741 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:02.741 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:02.741 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
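After wiping the old partition and disk signatures above, the test reformats the whole namespace and mounts it directly, which is what the setup/common.sh@66-72 xtrace around this point shows. Reduced to plain commands, the format-and-mount step looks roughly like this (the helper name and shortened mountpoint are illustrative; the real test uses the full workspace path shown in the log):

    # Make the mountpoint, format the device with ext4, then mount it.
    # An optional size argument (e.g. 1024M) is passed straight through to
    # mkfs.ext4 to cap the filesystem size, as in the whole-disk case here.
    mkfs_and_mount() {                      # illustrative name only
        local dev=$1 mnt=$2 size=$3
        mkdir -p "$mnt"
        mkfs.ext4 -qF "$dev" $size          # -q quiet, -F force (no prompt)
        mount "$dev" "$mnt"
    }
    mkfs_and_mount /dev/nvme0n1 /mnt/nvme_mount 1024M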
00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:02.741 13:33:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.043 13:33:58 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.043 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.044 13:33:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:09.346 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:09.607 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:09.607 00:06:09.607 real 0m12.693s 00:06:09.607 user 0m3.922s 00:06:09.607 sys 0m6.663s 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.607 13:34:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:09.607 ************************************ 00:06:09.607 END TEST nvme_mount 00:06:09.607 ************************************ 00:06:09.608 13:34:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:09.608 13:34:02 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
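With nvme_mount finished, the dm_mount test that starts below repeats the same pattern with two partitions and a device-mapper target layered on top, so it is worth recapping what nvme_mount actually did. In outline (a summary of the commands in the xtrace above, with the mountpoint path shortened; not the test script itself):

    # 1. Wipe any old partition table and create one 1 GiB data partition
    sgdisk /dev/nvme0n1 --zap-all
    sgdisk /dev/nvme0n1 --new=1:2048:2099199
    # 2. Format and mount it, and place a marker file for the verify step
    mkfs.ext4 -qF /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/nvme_mount
    touch /mnt/nvme_mount/test_nvme
    # 3. Re-run setup.sh config and confirm the mounted namespace is skipped
    #    ("Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev")
    # 4. Tear down: unmount, then wipe the partition and the whole disk
    umount /mnt/nvme_mount
    wipefs --all /dev/nvme0n1p1
    wipefs --all /dev/nvme0n1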
00:06:09.608 13:34:02 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.608 13:34:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:09.608 ************************************ 00:06:09.608 START TEST dm_mount 00:06:09.608 ************************************ 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:09.608 13:34:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:10.996 Creating new GPT entries in memory. 00:06:10.996 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:10.996 other utilities. 00:06:10.996 13:34:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:10.996 13:34:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:10.996 13:34:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:10.996 13:34:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:10.996 13:34:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:11.938 Creating new GPT entries in memory. 00:06:11.938 The operation has completed successfully. 
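The sgdisk boundaries used for partition 1 above, and for partition 2 in the call that follows, come from the 1073741824-byte size constant divided down to 512-byte sectors by setup/common.sh's (( size /= 512 )), i.e. 2097152 sectors per partition. A quick arithmetic check in the same shell style:

    size=1073741824                        # 1 GiB per partition, in bytes
    (( sectors = size / 512 ))             # 2097152 sectors of 512 bytes each
    echo $(( 2048 + sectors - 1 ))         # 2099199: end of partition 1 (starts at 2048)
    echo $(( 2099200 + sectors - 1 ))      # 4196351: end of partition 2 (starts right after)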
00:06:11.938 13:34:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:11.938 13:34:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.938 13:34:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:11.938 13:34:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:11.938 13:34:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:12.882 The operation has completed successfully. 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1883680 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:12.882 13:34:05 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.882 13:34:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.187 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:16.188 13:34:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:06:19.490 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:19.491 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:19.751 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.751 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:19.751 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:19.751 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:19.751 13:34:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:19.751 00:06:19.751 real 0m9.965s 00:06:19.751 user 0m2.541s 00:06:19.751 sys 0m4.503s 00:06:19.751 13:34:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:19.751 13:34:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:19.751 ************************************ 00:06:19.751 END TEST dm_mount 00:06:19.751 ************************************ 00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
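The dm_mount teardown above, together with the trap-driven cleanup that continues below, undoes the setup in reverse order: the device-mapper target is removed first, then the filesystem signatures on the partitions, and finally the partition table on the whole disk. In plain commands (a summary of the surrounding xtrace, not the script itself):

    dmsetup remove --force nvme_dm_test    # tear down the mapped device first
    wipefs --all /dev/nvme0n1p1            # clear the ext4 signatures on both partitions
    wipefs --all /dev/nvme0n1p2
    wipefs --all /dev/nvme0n1              # then erase the GPT/PMBR on the whole disk (below)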
00:06:19.751 13:34:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:20.028 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:20.028 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:06:20.028 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:20.028 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:20.028 13:34:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:20.028 00:06:20.028 real 0m26.936s 00:06:20.028 user 0m7.970s 00:06:20.028 sys 0m13.806s 00:06:20.028 13:34:12 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.028 13:34:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:20.028 ************************************ 00:06:20.028 END TEST devices 00:06:20.028 ************************************ 00:06:20.028 00:06:20.028 real 1m30.558s 00:06:20.028 user 0m30.274s 00:06:20.028 sys 0m51.770s 00:06:20.028 13:34:12 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.028 13:34:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:20.028 ************************************ 00:06:20.028 END TEST setup.sh 00:06:20.028 ************************************ 00:06:20.028 13:34:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:06:23.346 Hugepages 00:06:23.346 node hugesize free / total 00:06:23.346 node0 1048576kB 0 / 0 00:06:23.346 node0 2048kB 2048 / 2048 00:06:23.346 node1 1048576kB 0 / 0 00:06:23.346 node1 2048kB 0 / 0 00:06:23.346 00:06:23.346 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:23.346 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:23.346 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:23.607 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:23.607 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:23.607 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:23.607 13:34:16 -- spdk/autotest.sh@130 -- # uname -s 00:06:23.607 13:34:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:23.607 13:34:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:23.607 13:34:16 -- 
common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:26.908 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:26.908 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:28.818 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:28.818 13:34:21 -- common/autotest_common.sh@1531 -- # sleep 1 00:06:29.759 13:34:22 -- common/autotest_common.sh@1532 -- # bdfs=() 00:06:29.759 13:34:22 -- common/autotest_common.sh@1532 -- # local bdfs 00:06:29.759 13:34:22 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:06:29.759 13:34:22 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:06:29.759 13:34:22 -- common/autotest_common.sh@1512 -- # bdfs=() 00:06:29.759 13:34:22 -- common/autotest_common.sh@1512 -- # local bdfs 00:06:29.759 13:34:22 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.759 13:34:22 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:29.759 13:34:22 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:06:30.019 13:34:22 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:06:30.019 13:34:22 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:06:30.019 13:34:22 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:33.319 Waiting for block devices as requested 00:06:33.319 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:33.319 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:33.319 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:33.580 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:33.580 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:33.580 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:33.840 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:33.840 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:33.840 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:34.100 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:34.100 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:34.100 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:34.361 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:34.361 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:34.361 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:34.361 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:34.622 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:34.622 13:34:27 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:06:34.622 13:34:27 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1501 -- # 
readlink -f /sys/class/nvme/nvme0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:06:34.622 13:34:27 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:34.622 13:34:27 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:06:34.622 13:34:27 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:06:34.622 13:34:27 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:06:34.622 13:34:27 -- common/autotest_common.sh@1544 -- # grep oacs 00:06:34.622 13:34:27 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:06:34.623 13:34:27 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:06:34.623 13:34:27 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:06:34.623 13:34:27 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:06:34.623 13:34:27 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:06:34.623 13:34:27 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:06:34.623 13:34:27 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:06:34.623 13:34:27 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:06:34.623 13:34:27 -- common/autotest_common.sh@1556 -- # continue 00:06:34.623 13:34:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:34.623 13:34:27 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:34.623 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 13:34:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:34.623 13:34:27 -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:34.623 13:34:27 -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 13:34:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:37.926 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:37.926 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:38.187 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:38.187 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:38.187 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:38.187 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:38.187 13:34:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:38.187 13:34:31 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:38.187 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.187 13:34:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:38.187 13:34:31 -- 
common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:06:38.187 13:34:31 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:06:38.187 13:34:31 -- common/autotest_common.sh@1576 -- # bdfs=() 00:06:38.187 13:34:31 -- common/autotest_common.sh@1576 -- # local bdfs 00:06:38.187 13:34:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:06:38.187 13:34:31 -- common/autotest_common.sh@1512 -- # bdfs=() 00:06:38.187 13:34:31 -- common/autotest_common.sh@1512 -- # local bdfs 00:06:38.187 13:34:31 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:38.187 13:34:31 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:06:38.187 13:34:31 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:38.448 13:34:31 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:06:38.448 13:34:31 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:06:38.448 13:34:31 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:06:38.448 13:34:31 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:38.448 13:34:31 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:06:38.448 13:34:31 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:38.448 13:34:31 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:06:38.448 13:34:31 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:06:38.448 13:34:31 -- common/autotest_common.sh@1592 -- # return 0 00:06:38.448 13:34:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:38.448 13:34:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:38.448 13:34:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:38.448 13:34:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:38.448 13:34:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:38.448 13:34:31 -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:38.448 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.448 13:34:31 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:38.448 13:34:31 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:38.448 13:34:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:38.448 13:34:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.448 13:34:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.448 ************************************ 00:06:38.448 START TEST env 00:06:38.448 ************************************ 00:06:38.448 13:34:31 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:38.448 * Looking for test storage... 
00:06:38.448 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:38.448 13:34:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:38.448 13:34:31 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:38.448 13:34:31 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.448 13:34:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.448 ************************************ 00:06:38.448 START TEST env_memory 00:06:38.448 ************************************ 00:06:38.448 13:34:31 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:38.448 00:06:38.448 00:06:38.448 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.448 http://cunit.sourceforge.net/ 00:06:38.448 00:06:38.448 00:06:38.448 Suite: memory 00:06:38.708 Test: alloc and free memory map ...[2024-06-11 13:34:31.380199] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:38.708 passed 00:06:38.708 Test: mem map translation ...[2024-06-11 13:34:31.405812] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:38.708 [2024-06-11 13:34:31.405842] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:38.709 [2024-06-11 13:34:31.405888] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:38.709 [2024-06-11 13:34:31.405902] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:38.709 passed 00:06:38.709 Test: mem map registration ...[2024-06-11 13:34:31.461084] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:38.709 [2024-06-11 13:34:31.461106] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:38.709 passed 00:06:38.709 Test: mem map adjacent registrations ...passed 00:06:38.709 00:06:38.709 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.709 suites 1 1 n/a 0 0 00:06:38.709 tests 4 4 4 0 0 00:06:38.709 asserts 152 152 152 0 n/a 00:06:38.709 00:06:38.709 Elapsed time = 0.193 seconds 00:06:38.709 00:06:38.709 real 0m0.207s 00:06:38.709 user 0m0.191s 00:06:38.709 sys 0m0.015s 00:06:38.709 13:34:31 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:38.709 13:34:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:38.709 ************************************ 00:06:38.709 END TEST env_memory 00:06:38.709 ************************************ 00:06:38.709 13:34:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:38.709 13:34:31 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:38.709 13:34:31 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:38.709 13:34:31 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.709 ************************************ 00:06:38.709 START TEST env_vtophys 00:06:38.709 ************************************ 00:06:38.709 13:34:31 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:38.970 EAL: lib.eal log level changed from notice to debug 00:06:38.970 EAL: Detected lcore 0 as core 0 on socket 0 00:06:38.970 EAL: Detected lcore 1 as core 1 on socket 0 00:06:38.970 EAL: Detected lcore 2 as core 2 on socket 0 00:06:38.970 EAL: Detected lcore 3 as core 3 on socket 0 00:06:38.970 EAL: Detected lcore 4 as core 4 on socket 0 00:06:38.970 EAL: Detected lcore 5 as core 5 on socket 0 00:06:38.970 EAL: Detected lcore 6 as core 6 on socket 0 00:06:38.970 EAL: Detected lcore 7 as core 7 on socket 0 00:06:38.970 EAL: Detected lcore 8 as core 8 on socket 0 00:06:38.970 EAL: Detected lcore 9 as core 9 on socket 0 00:06:38.970 EAL: Detected lcore 10 as core 10 on socket 0 00:06:38.970 EAL: Detected lcore 11 as core 11 on socket 0 00:06:38.970 EAL: Detected lcore 12 as core 12 on socket 0 00:06:38.970 EAL: Detected lcore 13 as core 13 on socket 0 00:06:38.970 EAL: Detected lcore 14 as core 14 on socket 0 00:06:38.970 EAL: Detected lcore 15 as core 15 on socket 0 00:06:38.970 EAL: Detected lcore 16 as core 16 on socket 0 00:06:38.970 EAL: Detected lcore 17 as core 17 on socket 0 00:06:38.970 EAL: Detected lcore 18 as core 18 on socket 0 00:06:38.970 EAL: Detected lcore 19 as core 19 on socket 0 00:06:38.970 EAL: Detected lcore 20 as core 20 on socket 0 00:06:38.970 EAL: Detected lcore 21 as core 21 on socket 0 00:06:38.970 EAL: Detected lcore 22 as core 22 on socket 0 00:06:38.970 EAL: Detected lcore 23 as core 23 on socket 0 00:06:38.970 EAL: Detected lcore 24 as core 24 on socket 0 00:06:38.970 EAL: Detected lcore 25 as core 25 on socket 0 00:06:38.970 EAL: Detected lcore 26 as core 26 on socket 0 00:06:38.970 EAL: Detected lcore 27 as core 27 on socket 0 00:06:38.970 EAL: Detected lcore 28 as core 28 on socket 0 00:06:38.970 EAL: Detected lcore 29 as core 29 on socket 0 00:06:38.970 EAL: Detected lcore 30 as core 30 on socket 0 00:06:38.970 EAL: Detected lcore 31 as core 31 on socket 0 00:06:38.970 EAL: Detected lcore 32 as core 32 on socket 0 00:06:38.970 EAL: Detected lcore 33 as core 33 on socket 0 00:06:38.970 EAL: Detected lcore 34 as core 34 on socket 0 00:06:38.970 EAL: Detected lcore 35 as core 35 on socket 0 00:06:38.970 EAL: Detected lcore 36 as core 0 on socket 1 00:06:38.970 EAL: Detected lcore 37 as core 1 on socket 1 00:06:38.970 EAL: Detected lcore 38 as core 2 on socket 1 00:06:38.970 EAL: Detected lcore 39 as core 3 on socket 1 00:06:38.970 EAL: Detected lcore 40 as core 4 on socket 1 00:06:38.970 EAL: Detected lcore 41 as core 5 on socket 1 00:06:38.970 EAL: Detected lcore 42 as core 6 on socket 1 00:06:38.970 EAL: Detected lcore 43 as core 7 on socket 1 00:06:38.970 EAL: Detected lcore 44 as core 8 on socket 1 00:06:38.970 EAL: Detected lcore 45 as core 9 on socket 1 00:06:38.970 EAL: Detected lcore 46 as core 10 on socket 1 00:06:38.970 EAL: Detected lcore 47 as core 11 on socket 1 00:06:38.970 EAL: Detected lcore 48 as core 12 on socket 1 00:06:38.970 EAL: Detected lcore 49 as core 13 on socket 1 00:06:38.970 EAL: Detected lcore 50 as core 14 on socket 1 00:06:38.970 EAL: Detected lcore 51 as core 15 on socket 1 00:06:38.970 EAL: Detected lcore 52 as core 16 on socket 1 00:06:38.970 EAL: Detected lcore 53 as core 17 on socket 1 
00:06:38.970 EAL: Detected lcore 54 as core 18 on socket 1 00:06:38.970 EAL: Detected lcore 55 as core 19 on socket 1 00:06:38.970 EAL: Detected lcore 56 as core 20 on socket 1 00:06:38.970 EAL: Detected lcore 57 as core 21 on socket 1 00:06:38.970 EAL: Detected lcore 58 as core 22 on socket 1 00:06:38.970 EAL: Detected lcore 59 as core 23 on socket 1 00:06:38.970 EAL: Detected lcore 60 as core 24 on socket 1 00:06:38.970 EAL: Detected lcore 61 as core 25 on socket 1 00:06:38.970 EAL: Detected lcore 62 as core 26 on socket 1 00:06:38.970 EAL: Detected lcore 63 as core 27 on socket 1 00:06:38.970 EAL: Detected lcore 64 as core 28 on socket 1 00:06:38.970 EAL: Detected lcore 65 as core 29 on socket 1 00:06:38.970 EAL: Detected lcore 66 as core 30 on socket 1 00:06:38.970 EAL: Detected lcore 67 as core 31 on socket 1 00:06:38.970 EAL: Detected lcore 68 as core 32 on socket 1 00:06:38.970 EAL: Detected lcore 69 as core 33 on socket 1 00:06:38.970 EAL: Detected lcore 70 as core 34 on socket 1 00:06:38.970 EAL: Detected lcore 71 as core 35 on socket 1 00:06:38.970 EAL: Detected lcore 72 as core 0 on socket 0 00:06:38.970 EAL: Detected lcore 73 as core 1 on socket 0 00:06:38.970 EAL: Detected lcore 74 as core 2 on socket 0 00:06:38.970 EAL: Detected lcore 75 as core 3 on socket 0 00:06:38.970 EAL: Detected lcore 76 as core 4 on socket 0 00:06:38.970 EAL: Detected lcore 77 as core 5 on socket 0 00:06:38.970 EAL: Detected lcore 78 as core 6 on socket 0 00:06:38.970 EAL: Detected lcore 79 as core 7 on socket 0 00:06:38.970 EAL: Detected lcore 80 as core 8 on socket 0 00:06:38.970 EAL: Detected lcore 81 as core 9 on socket 0 00:06:38.970 EAL: Detected lcore 82 as core 10 on socket 0 00:06:38.970 EAL: Detected lcore 83 as core 11 on socket 0 00:06:38.970 EAL: Detected lcore 84 as core 12 on socket 0 00:06:38.970 EAL: Detected lcore 85 as core 13 on socket 0 00:06:38.970 EAL: Detected lcore 86 as core 14 on socket 0 00:06:38.970 EAL: Detected lcore 87 as core 15 on socket 0 00:06:38.970 EAL: Detected lcore 88 as core 16 on socket 0 00:06:38.971 EAL: Detected lcore 89 as core 17 on socket 0 00:06:38.971 EAL: Detected lcore 90 as core 18 on socket 0 00:06:38.971 EAL: Detected lcore 91 as core 19 on socket 0 00:06:38.971 EAL: Detected lcore 92 as core 20 on socket 0 00:06:38.971 EAL: Detected lcore 93 as core 21 on socket 0 00:06:38.971 EAL: Detected lcore 94 as core 22 on socket 0 00:06:38.971 EAL: Detected lcore 95 as core 23 on socket 0 00:06:38.971 EAL: Detected lcore 96 as core 24 on socket 0 00:06:38.971 EAL: Detected lcore 97 as core 25 on socket 0 00:06:38.971 EAL: Detected lcore 98 as core 26 on socket 0 00:06:38.971 EAL: Detected lcore 99 as core 27 on socket 0 00:06:38.971 EAL: Detected lcore 100 as core 28 on socket 0 00:06:38.971 EAL: Detected lcore 101 as core 29 on socket 0 00:06:38.971 EAL: Detected lcore 102 as core 30 on socket 0 00:06:38.971 EAL: Detected lcore 103 as core 31 on socket 0 00:06:38.971 EAL: Detected lcore 104 as core 32 on socket 0 00:06:38.971 EAL: Detected lcore 105 as core 33 on socket 0 00:06:38.971 EAL: Detected lcore 106 as core 34 on socket 0 00:06:38.971 EAL: Detected lcore 107 as core 35 on socket 0 00:06:38.971 EAL: Detected lcore 108 as core 0 on socket 1 00:06:38.971 EAL: Detected lcore 109 as core 1 on socket 1 00:06:38.971 EAL: Detected lcore 110 as core 2 on socket 1 00:06:38.971 EAL: Detected lcore 111 as core 3 on socket 1 00:06:38.971 EAL: Detected lcore 112 as core 4 on socket 1 00:06:38.971 EAL: Detected lcore 113 as core 5 on socket 1 00:06:38.971 
EAL: Detected lcore 114 as core 6 on socket 1 00:06:38.971 EAL: Detected lcore 115 as core 7 on socket 1 00:06:38.971 EAL: Detected lcore 116 as core 8 on socket 1 00:06:38.971 EAL: Detected lcore 117 as core 9 on socket 1 00:06:38.971 EAL: Detected lcore 118 as core 10 on socket 1 00:06:38.971 EAL: Detected lcore 119 as core 11 on socket 1 00:06:38.971 EAL: Detected lcore 120 as core 12 on socket 1 00:06:38.971 EAL: Detected lcore 121 as core 13 on socket 1 00:06:38.971 EAL: Detected lcore 122 as core 14 on socket 1 00:06:38.971 EAL: Detected lcore 123 as core 15 on socket 1 00:06:38.971 EAL: Detected lcore 124 as core 16 on socket 1 00:06:38.971 EAL: Detected lcore 125 as core 17 on socket 1 00:06:38.971 EAL: Detected lcore 126 as core 18 on socket 1 00:06:38.971 EAL: Detected lcore 127 as core 19 on socket 1 00:06:38.971 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:38.971 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:38.971 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:38.971 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:38.971 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:38.971 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:38.971 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:38.971 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:38.971 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:38.971 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:38.971 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:38.971 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:38.971 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:38.971 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:38.971 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:38.971 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:38.971 EAL: Maximum logical cores by configuration: 128 00:06:38.971 EAL: Detected CPU lcores: 128 00:06:38.971 EAL: Detected NUMA nodes: 2 00:06:38.971 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:38.971 EAL: Detected shared linkage of DPDK 00:06:38.971 EAL: No shared files mode enabled, IPC will be disabled 00:06:38.971 EAL: Bus pci wants IOVA as 'DC' 00:06:38.971 EAL: Buses did not request a specific IOVA mode. 00:06:38.971 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:38.971 EAL: Selected IOVA mode 'VA' 00:06:38.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.971 EAL: Probing VFIO support... 00:06:38.971 EAL: IOMMU type 1 (Type 1) is supported 00:06:38.971 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:38.971 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:38.971 EAL: VFIO support initialized 00:06:38.971 EAL: Ask a virtual area of 0x2e000 bytes 00:06:38.971 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:38.971 EAL: Setting up physically contiguous memory... 
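The EAL banner above records what the vtophys test found on this node: a 2-socket / 2-NUMA-node lcore map, IOMMU type 1, VFIO support, and 2 MB hugepages. A rough pre-flight sketch for confirming the same facts from sysfs before a run, assuming a stock Linux layout (this is not part of the SPDK test itself):

# Sanity checks mirroring the EAL detection output above (sketch only).
lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node\(s\)):'        # expect 2 sockets / 2 NUMA nodes on this box
ls /sys/kernel/iommu_groups | wc -l                              # non-zero => IOMMU enabled (EAL's "IOMMU type 1")
lsmod | grep -E '^vfio'                                          # vfio / vfio_pci loaded => "VFIO support initialized"
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages   # 2 MB pages per NUMA node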
00:06:38.971 EAL: Setting maximum number of open files to 524288 00:06:38.971 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:38.971 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:38.971 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:38.971 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:38.971 EAL: Ask a virtual area of 0x61000 bytes 00:06:38.971 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:38.971 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:38.971 EAL: Ask a virtual area of 0x400000000 bytes 00:06:38.971 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:38.971 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:38.971 EAL: Hugepages will be freed exactly as allocated. 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: TSC frequency is ~2400000 KHz 00:06:38.971 EAL: Main lcore 0 is ready (tid=7f756a5e2a00;cpuset=[0]) 00:06:38.971 EAL: Trying to obtain current memory policy. 00:06:38.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.971 EAL: Restoring previous memory policy: 0 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was expanded by 2MB 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:38.971 EAL: Mem event callback 'spdk:(nil)' registered 00:06:38.971 00:06:38.971 00:06:38.971 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.971 http://cunit.sourceforge.net/ 00:06:38.971 00:06:38.971 00:06:38.971 Suite: components_suite 00:06:38.971 Test: vtophys_malloc_test ...passed 00:06:38.971 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:38.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.971 EAL: Restoring previous memory policy: 4 00:06:38.971 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was expanded by 4MB 00:06:38.971 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was shrunk by 4MB 00:06:38.971 EAL: Trying to obtain current memory policy. 00:06:38.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.971 EAL: Restoring previous memory policy: 4 00:06:38.971 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was expanded by 6MB 00:06:38.971 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was shrunk by 6MB 00:06:38.971 EAL: Trying to obtain current memory policy. 00:06:38.971 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.971 EAL: Restoring previous memory policy: 4 00:06:38.971 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was expanded by 10MB 00:06:38.971 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.971 EAL: request: mp_malloc_sync 00:06:38.971 EAL: No shared files mode enabled, IPC is disabled 00:06:38.971 EAL: Heap on socket 0 was shrunk by 10MB 00:06:38.972 EAL: Trying to obtain current memory policy. 
00:06:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.972 EAL: Restoring previous memory policy: 4 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was expanded by 18MB 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was shrunk by 18MB 00:06:38.972 EAL: Trying to obtain current memory policy. 00:06:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.972 EAL: Restoring previous memory policy: 4 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was expanded by 34MB 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was shrunk by 34MB 00:06:38.972 EAL: Trying to obtain current memory policy. 00:06:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.972 EAL: Restoring previous memory policy: 4 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was expanded by 66MB 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was shrunk by 66MB 00:06:38.972 EAL: Trying to obtain current memory policy. 00:06:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.972 EAL: Restoring previous memory policy: 4 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was expanded by 130MB 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was shrunk by 130MB 00:06:38.972 EAL: Trying to obtain current memory policy. 00:06:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.972 EAL: Restoring previous memory policy: 4 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was expanded by 258MB 00:06:38.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.972 EAL: request: mp_malloc_sync 00:06:38.972 EAL: No shared files mode enabled, IPC is disabled 00:06:38.972 EAL: Heap on socket 0 was shrunk by 258MB 00:06:38.972 EAL: Trying to obtain current memory policy. 
00:06:38.972 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.232 EAL: Restoring previous memory policy: 4 00:06:39.232 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.232 EAL: request: mp_malloc_sync 00:06:39.232 EAL: No shared files mode enabled, IPC is disabled 00:06:39.232 EAL: Heap on socket 0 was expanded by 514MB 00:06:39.232 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.232 EAL: request: mp_malloc_sync 00:06:39.232 EAL: No shared files mode enabled, IPC is disabled 00:06:39.232 EAL: Heap on socket 0 was shrunk by 514MB 00:06:39.232 EAL: Trying to obtain current memory policy. 00:06:39.232 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:39.492 EAL: Restoring previous memory policy: 4 00:06:39.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.492 EAL: request: mp_malloc_sync 00:06:39.492 EAL: No shared files mode enabled, IPC is disabled 00:06:39.492 EAL: Heap on socket 0 was expanded by 1026MB 00:06:39.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.492 EAL: request: mp_malloc_sync 00:06:39.492 EAL: No shared files mode enabled, IPC is disabled 00:06:39.492 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:39.492 passed 00:06:39.492 00:06:39.492 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.492 suites 1 1 n/a 0 0 00:06:39.492 tests 2 2 2 0 0 00:06:39.492 asserts 497 497 497 0 n/a 00:06:39.492 00:06:39.492 Elapsed time = 0.642 seconds 00:06:39.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:39.492 EAL: request: mp_malloc_sync 00:06:39.492 EAL: No shared files mode enabled, IPC is disabled 00:06:39.492 EAL: Heap on socket 0 was shrunk by 2MB 00:06:39.492 EAL: No shared files mode enabled, IPC is disabled 00:06:39.492 EAL: No shared files mode enabled, IPC is disabled 00:06:39.492 EAL: No shared files mode enabled, IPC is disabled 00:06:39.492 00:06:39.492 real 0m0.771s 00:06:39.492 user 0m0.404s 00:06:39.492 sys 0m0.341s 00:06:39.492 13:34:32 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.492 13:34:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:39.492 ************************************ 00:06:39.492 END TEST env_vtophys 00:06:39.492 ************************************ 00:06:39.752 13:34:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:39.752 13:34:32 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:39.752 13:34:32 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.752 13:34:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.752 ************************************ 00:06:39.752 START TEST env_pci 00:06:39.752 ************************************ 00:06:39.753 13:34:32 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:39.753 00:06:39.753 00:06:39.753 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.753 http://cunit.sourceforge.net/ 00:06:39.753 00:06:39.753 00:06:39.753 Suite: pci 00:06:39.753 Test: pci_hook ...[2024-06-11 13:34:32.473941] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1894854 has claimed it 00:06:39.753 EAL: Cannot find device (10000:00:01.0) 00:06:39.753 EAL: Failed to attach device on primary process 00:06:39.753 passed 00:06:39.753 00:06:39.753 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.753 suites 1 
1 n/a 0 0 00:06:39.753 tests 1 1 1 0 0 00:06:39.753 asserts 25 25 25 0 n/a 00:06:39.753 00:06:39.753 Elapsed time = 0.037 seconds 00:06:39.753 00:06:39.753 real 0m0.057s 00:06:39.753 user 0m0.016s 00:06:39.753 sys 0m0.041s 00:06:39.753 13:34:32 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.753 13:34:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 ************************************ 00:06:39.753 END TEST env_pci 00:06:39.753 ************************************ 00:06:39.753 13:34:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:39.753 13:34:32 env -- env/env.sh@15 -- # uname 00:06:39.753 13:34:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:39.753 13:34:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:39.753 13:34:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:39.753 13:34:32 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:39.753 13:34:32 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.753 13:34:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.753 ************************************ 00:06:39.753 START TEST env_dpdk_post_init 00:06:39.753 ************************************ 00:06:39.753 13:34:32 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:39.753 EAL: Detected CPU lcores: 128 00:06:39.753 EAL: Detected NUMA nodes: 2 00:06:39.753 EAL: Detected shared linkage of DPDK 00:06:39.753 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.753 EAL: Selected IOVA mode 'VA' 00:06:39.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.753 EAL: VFIO support initialized 00:06:39.753 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:40.013 EAL: Using IOMMU type 1 (Type 1) 00:06:40.013 EAL: Ignore mapping IO port bar(1) 00:06:40.274 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:40.274 EAL: Ignore mapping IO port bar(1) 00:06:40.274 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:40.534 EAL: Ignore mapping IO port bar(1) 00:06:40.534 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:40.794 EAL: Ignore mapping IO port bar(1) 00:06:40.794 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:41.054 EAL: Ignore mapping IO port bar(1) 00:06:41.054 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:41.054 EAL: Ignore mapping IO port bar(1) 00:06:41.314 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:41.314 EAL: Ignore mapping IO port bar(1) 00:06:41.575 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:41.575 EAL: Ignore mapping IO port bar(1) 00:06:41.835 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:41.835 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:42.095 EAL: Ignore mapping IO port bar(1) 00:06:42.095 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:42.355 EAL: Ignore mapping IO port bar(1) 00:06:42.355 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:42.615 EAL: Ignore 
mapping IO port bar(1) 00:06:42.615 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:42.615 EAL: Ignore mapping IO port bar(1) 00:06:42.875 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:42.875 EAL: Ignore mapping IO port bar(1) 00:06:43.135 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:43.135 EAL: Ignore mapping IO port bar(1) 00:06:43.395 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:43.395 EAL: Ignore mapping IO port bar(1) 00:06:43.395 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:43.655 EAL: Ignore mapping IO port bar(1) 00:06:43.655 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:43.655 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:43.655 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:43.915 Starting DPDK initialization... 00:06:43.916 Starting SPDK post initialization... 00:06:43.916 SPDK NVMe probe 00:06:43.916 Attaching to 0000:65:00.0 00:06:43.916 Attached to 0000:65:00.0 00:06:43.916 Cleaning up... 00:06:45.828 00:06:45.828 real 0m5.716s 00:06:45.828 user 0m0.182s 00:06:45.828 sys 0m0.082s 00:06:45.828 13:34:38 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.828 13:34:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.828 ************************************ 00:06:45.828 END TEST env_dpdk_post_init 00:06:45.828 ************************************ 00:06:45.828 13:34:38 env -- env/env.sh@26 -- # uname 00:06:45.828 13:34:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.828 13:34:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.828 13:34:38 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:45.828 13:34:38 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.828 13:34:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.828 ************************************ 00:06:45.828 START TEST env_mem_callbacks 00:06:45.828 ************************************ 00:06:45.828 13:34:38 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.828 EAL: Detected CPU lcores: 128 00:06:45.828 EAL: Detected NUMA nodes: 2 00:06:45.828 EAL: Detected shared linkage of DPDK 00:06:45.828 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.828 EAL: Selected IOVA mode 'VA' 00:06:45.828 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.828 EAL: VFIO support initialized 00:06:45.828 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.828 00:06:45.828 00:06:45.828 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.828 http://cunit.sourceforge.net/ 00:06:45.828 00:06:45.828 00:06:45.828 Suite: memory 00:06:45.828 Test: test ... 
00:06:45.828 register 0x200000200000 2097152 00:06:45.828 malloc 3145728 00:06:45.828 register 0x200000400000 4194304 00:06:45.828 buf 0x200000500000 len 3145728 PASSED 00:06:45.828 malloc 64 00:06:45.828 buf 0x2000004fff40 len 64 PASSED 00:06:45.828 malloc 4194304 00:06:45.828 register 0x200000800000 6291456 00:06:45.828 buf 0x200000a00000 len 4194304 PASSED 00:06:45.828 free 0x200000500000 3145728 00:06:45.828 free 0x2000004fff40 64 00:06:45.828 unregister 0x200000400000 4194304 PASSED 00:06:45.828 free 0x200000a00000 4194304 00:06:45.828 unregister 0x200000800000 6291456 PASSED 00:06:45.828 malloc 8388608 00:06:45.828 register 0x200000400000 10485760 00:06:45.828 buf 0x200000600000 len 8388608 PASSED 00:06:45.828 free 0x200000600000 8388608 00:06:45.828 unregister 0x200000400000 10485760 PASSED 00:06:45.828 passed 00:06:45.828 00:06:45.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.828 suites 1 1 n/a 0 0 00:06:45.828 tests 1 1 1 0 0 00:06:45.828 asserts 15 15 15 0 n/a 00:06:45.828 00:06:45.828 Elapsed time = 0.005 seconds 00:06:45.828 00:06:45.828 real 0m0.057s 00:06:45.828 user 0m0.021s 00:06:45.828 sys 0m0.035s 00:06:45.828 13:34:38 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.828 13:34:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:45.828 ************************************ 00:06:45.828 END TEST env_mem_callbacks 00:06:45.828 ************************************ 00:06:45.828 00:06:45.828 real 0m7.292s 00:06:45.828 user 0m0.995s 00:06:45.828 sys 0m0.846s 00:06:45.828 13:34:38 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.828 13:34:38 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.828 ************************************ 00:06:45.828 END TEST env 00:06:45.828 ************************************ 00:06:45.828 13:34:38 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:45.828 13:34:38 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:45.828 13:34:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.828 13:34:38 -- common/autotest_common.sh@10 -- # set +x 00:06:45.828 ************************************ 00:06:45.828 START TEST rpc 00:06:45.828 ************************************ 00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:45.828 * Looking for test storage... 00:06:45.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:45.828 13:34:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1896017 00:06:45.828 13:34:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.828 13:34:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:45.828 13:34:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1896017 00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@830 -- # '[' -z 1896017 ']' 00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
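The rpc suite that starts here drives a freshly launched spdk_tgt over its UNIX-domain RPC socket. A minimal stand-alone sketch of the same start/wait/call pattern follows; it uses scripts/rpc.py rather than the suite's waitforlisten helper, and SPDK_DIR plus the default /var/tmp/spdk.sock path are assumptions taken from this run.

# Sketch of the start-and-wait pattern used by waitforlisten above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
sock=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -e bdev &          # -e bdev enables the bdev tracepoint group, as in this run
tgt_pid=$!

# Poll until the RPC socket answers instead of sleeping for a fixed time.
until "$SPDK_DIR/scripts/rpc.py" -s "$sock" -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "spdk_tgt died during startup" >&2; exit 1; }
    sleep 0.2
done

"$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods | head   # target is up; issue any RPC
kill "$tgt_pid"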
00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:45.828 13:34:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.829 [2024-06-11 13:34:38.719172] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:45.829 [2024-06-11 13:34:38.719229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896017 ] 00:06:46.089 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.089 [2024-06-11 13:34:38.782182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.089 [2024-06-11 13:34:38.850612] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:46.089 [2024-06-11 13:34:38.850651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1896017' to capture a snapshot of events at runtime. 00:06:46.089 [2024-06-11 13:34:38.850659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.089 [2024-06-11 13:34:38.850665] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.089 [2024-06-11 13:34:38.850670] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1896017 for offline analysis/debug. 00:06:46.089 [2024-06-11 13:34:38.850689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.662 13:34:39 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:46.662 13:34:39 rpc -- common/autotest_common.sh@863 -- # return 0 00:06:46.662 13:34:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:46.662 13:34:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:46.662 13:34:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:46.662 13:34:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:46.662 13:34:39 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.662 13:34:39 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.662 13:34:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.662 ************************************ 00:06:46.662 START TEST rpc_integrity 00:06:46.662 ************************************ 00:06:46.662 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:06:46.662 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:46.662 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.662 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.662 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.662 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:46.662 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:46.662 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
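The rpc_integrity test traced below boils down to counting bdevs before and after creating a malloc bdev and a passthru bdev stacked on top of it. A condensed sketch of the same RPC round trip, issued with plain rpc.py calls instead of the suite's rpc_cmd wrapper, and assuming the target started above is still listening on /var/tmp/spdk.sock:

# Same bdev round trip as rpc_integrity, expressed as direct rpc.py calls.
set -euo pipefail
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"      # SPDK_DIR as in the previous sketch

[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]            # start with no bdevs
malloc=$($rpc bdev_malloc_create 8 512)                   # 8 MB at 512-byte blocks -> 16384 blocks, prints the name
$rpc bdev_passthru_create -b "$malloc" -p Passthru0       # stack a passthru bdev on the malloc bdev
[ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]            # Malloc0 + Passthru0 now listed
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]            # back to empty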
00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:46.922 { 00:06:46.922 "name": "Malloc0", 00:06:46.922 "aliases": [ 00:06:46.922 "62be960f-ad38-4cf0-8700-42b3d1e470a7" 00:06:46.922 ], 00:06:46.922 "product_name": "Malloc disk", 00:06:46.922 "block_size": 512, 00:06:46.922 "num_blocks": 16384, 00:06:46.922 "uuid": "62be960f-ad38-4cf0-8700-42b3d1e470a7", 00:06:46.922 "assigned_rate_limits": { 00:06:46.922 "rw_ios_per_sec": 0, 00:06:46.922 "rw_mbytes_per_sec": 0, 00:06:46.922 "r_mbytes_per_sec": 0, 00:06:46.922 "w_mbytes_per_sec": 0 00:06:46.922 }, 00:06:46.922 "claimed": false, 00:06:46.922 "zoned": false, 00:06:46.922 "supported_io_types": { 00:06:46.922 "read": true, 00:06:46.922 "write": true, 00:06:46.922 "unmap": true, 00:06:46.922 "write_zeroes": true, 00:06:46.922 "flush": true, 00:06:46.922 "reset": true, 00:06:46.922 "compare": false, 00:06:46.922 "compare_and_write": false, 00:06:46.922 "abort": true, 00:06:46.922 "nvme_admin": false, 00:06:46.922 "nvme_io": false 00:06:46.922 }, 00:06:46.922 "memory_domains": [ 00:06:46.922 { 00:06:46.922 "dma_device_id": "system", 00:06:46.922 "dma_device_type": 1 00:06:46.922 }, 00:06:46.922 { 00:06:46.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.922 "dma_device_type": 2 00:06:46.922 } 00:06:46.922 ], 00:06:46.922 "driver_specific": {} 00:06:46.922 } 00:06:46.922 ]' 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.922 [2024-06-11 13:34:39.658271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:46.922 [2024-06-11 13:34:39.658305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.922 [2024-06-11 13:34:39.658317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf5b010 00:06:46.922 [2024-06-11 13:34:39.658324] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.922 [2024-06-11 13:34:39.659663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.922 [2024-06-11 13:34:39.659685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:46.922 Passthru0 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:46.922 13:34:39 
rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.922 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:46.922 { 00:06:46.922 "name": "Malloc0", 00:06:46.922 "aliases": [ 00:06:46.922 "62be960f-ad38-4cf0-8700-42b3d1e470a7" 00:06:46.922 ], 00:06:46.922 "product_name": "Malloc disk", 00:06:46.922 "block_size": 512, 00:06:46.922 "num_blocks": 16384, 00:06:46.922 "uuid": "62be960f-ad38-4cf0-8700-42b3d1e470a7", 00:06:46.922 "assigned_rate_limits": { 00:06:46.922 "rw_ios_per_sec": 0, 00:06:46.922 "rw_mbytes_per_sec": 0, 00:06:46.922 "r_mbytes_per_sec": 0, 00:06:46.922 "w_mbytes_per_sec": 0 00:06:46.922 }, 00:06:46.922 "claimed": true, 00:06:46.922 "claim_type": "exclusive_write", 00:06:46.922 "zoned": false, 00:06:46.922 "supported_io_types": { 00:06:46.922 "read": true, 00:06:46.922 "write": true, 00:06:46.922 "unmap": true, 00:06:46.922 "write_zeroes": true, 00:06:46.922 "flush": true, 00:06:46.922 "reset": true, 00:06:46.922 "compare": false, 00:06:46.922 "compare_and_write": false, 00:06:46.922 "abort": true, 00:06:46.922 "nvme_admin": false, 00:06:46.922 "nvme_io": false 00:06:46.922 }, 00:06:46.922 "memory_domains": [ 00:06:46.922 { 00:06:46.922 "dma_device_id": "system", 00:06:46.922 "dma_device_type": 1 00:06:46.922 }, 00:06:46.922 { 00:06:46.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.922 "dma_device_type": 2 00:06:46.922 } 00:06:46.922 ], 00:06:46.922 "driver_specific": {} 00:06:46.922 }, 00:06:46.922 { 00:06:46.922 "name": "Passthru0", 00:06:46.922 "aliases": [ 00:06:46.922 "2f43365c-788d-504d-b9f0-ae614933c455" 00:06:46.922 ], 00:06:46.922 "product_name": "passthru", 00:06:46.922 "block_size": 512, 00:06:46.922 "num_blocks": 16384, 00:06:46.922 "uuid": "2f43365c-788d-504d-b9f0-ae614933c455", 00:06:46.922 "assigned_rate_limits": { 00:06:46.922 "rw_ios_per_sec": 0, 00:06:46.922 "rw_mbytes_per_sec": 0, 00:06:46.922 "r_mbytes_per_sec": 0, 00:06:46.922 "w_mbytes_per_sec": 0 00:06:46.922 }, 00:06:46.922 "claimed": false, 00:06:46.922 "zoned": false, 00:06:46.922 "supported_io_types": { 00:06:46.922 "read": true, 00:06:46.922 "write": true, 00:06:46.922 "unmap": true, 00:06:46.922 "write_zeroes": true, 00:06:46.922 "flush": true, 00:06:46.922 "reset": true, 00:06:46.922 "compare": false, 00:06:46.922 "compare_and_write": false, 00:06:46.922 "abort": true, 00:06:46.922 "nvme_admin": false, 00:06:46.922 "nvme_io": false 00:06:46.922 }, 00:06:46.922 "memory_domains": [ 00:06:46.922 { 00:06:46.922 "dma_device_id": "system", 00:06:46.922 "dma_device_type": 1 00:06:46.922 }, 00:06:46.922 { 00:06:46.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.922 "dma_device_type": 2 00:06:46.922 } 00:06:46.922 ], 00:06:46.922 "driver_specific": { 00:06:46.922 "passthru": { 00:06:46.922 "name": "Passthru0", 00:06:46.922 "base_bdev_name": "Malloc0" 00:06:46.922 } 00:06:46.922 } 00:06:46.922 } 00:06:46.922 ]' 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:46.922 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.923 13:34:39 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.923 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.923 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.923 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:46.923 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:46.923 13:34:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:46.923 00:06:46.923 real 0m0.300s 00:06:46.923 user 0m0.194s 00:06:46.923 sys 0m0.038s 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.923 13:34:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.923 ************************************ 00:06:46.923 END TEST rpc_integrity 00:06:46.923 ************************************ 00:06:47.183 13:34:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:47.183 13:34:39 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:47.183 13:34:39 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.183 13:34:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.183 ************************************ 00:06:47.183 START TEST rpc_plugins 00:06:47.183 ************************************ 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:06:47.183 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.183 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:47.183 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.183 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.183 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:47.184 { 00:06:47.184 "name": "Malloc1", 00:06:47.184 "aliases": [ 00:06:47.184 "96f2d76c-9409-4656-91c1-bce7df4ad591" 00:06:47.184 ], 00:06:47.184 "product_name": "Malloc disk", 00:06:47.184 "block_size": 4096, 00:06:47.184 "num_blocks": 256, 00:06:47.184 "uuid": "96f2d76c-9409-4656-91c1-bce7df4ad591", 00:06:47.184 "assigned_rate_limits": { 00:06:47.184 "rw_ios_per_sec": 0, 00:06:47.184 "rw_mbytes_per_sec": 0, 00:06:47.184 "r_mbytes_per_sec": 0, 00:06:47.184 "w_mbytes_per_sec": 0 00:06:47.184 }, 00:06:47.184 "claimed": false, 00:06:47.184 "zoned": false, 00:06:47.184 "supported_io_types": { 00:06:47.184 "read": true, 00:06:47.184 "write": true, 00:06:47.184 "unmap": true, 00:06:47.184 "write_zeroes": true, 00:06:47.184 "flush": true, 00:06:47.184 
"reset": true, 00:06:47.184 "compare": false, 00:06:47.184 "compare_and_write": false, 00:06:47.184 "abort": true, 00:06:47.184 "nvme_admin": false, 00:06:47.184 "nvme_io": false 00:06:47.184 }, 00:06:47.184 "memory_domains": [ 00:06:47.184 { 00:06:47.184 "dma_device_id": "system", 00:06:47.184 "dma_device_type": 1 00:06:47.184 }, 00:06:47.184 { 00:06:47.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.184 "dma_device_type": 2 00:06:47.184 } 00:06:47.184 ], 00:06:47.184 "driver_specific": {} 00:06:47.184 } 00:06:47.184 ]' 00:06:47.184 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:47.184 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:47.184 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:47.184 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.184 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.184 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.184 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:47.184 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.184 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.184 13:34:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.184 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:47.184 13:34:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:47.184 13:34:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:47.184 00:06:47.184 real 0m0.143s 00:06:47.184 user 0m0.090s 00:06:47.184 sys 0m0.017s 00:06:47.184 13:34:40 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.184 13:34:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.184 ************************************ 00:06:47.184 END TEST rpc_plugins 00:06:47.184 ************************************ 00:06:47.184 13:34:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:47.184 13:34:40 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:47.184 13:34:40 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.184 13:34:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.445 ************************************ 00:06:47.445 START TEST rpc_trace_cmd_test 00:06:47.445 ************************************ 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:47.445 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1896017", 00:06:47.445 "tpoint_group_mask": "0x8", 00:06:47.445 "iscsi_conn": { 00:06:47.445 "mask": "0x2", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "scsi": { 00:06:47.445 "mask": "0x4", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "bdev": { 00:06:47.445 "mask": "0x8", 00:06:47.445 "tpoint_mask": "0xffffffffffffffff" 00:06:47.445 }, 
00:06:47.445 "nvmf_rdma": { 00:06:47.445 "mask": "0x10", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "nvmf_tcp": { 00:06:47.445 "mask": "0x20", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "ftl": { 00:06:47.445 "mask": "0x40", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "blobfs": { 00:06:47.445 "mask": "0x80", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "dsa": { 00:06:47.445 "mask": "0x200", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "thread": { 00:06:47.445 "mask": "0x400", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "nvme_pcie": { 00:06:47.445 "mask": "0x800", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "iaa": { 00:06:47.445 "mask": "0x1000", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "nvme_tcp": { 00:06:47.445 "mask": "0x2000", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "bdev_nvme": { 00:06:47.445 "mask": "0x4000", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 }, 00:06:47.445 "sock": { 00:06:47.445 "mask": "0x8000", 00:06:47.445 "tpoint_mask": "0x0" 00:06:47.445 } 00:06:47.445 }' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:47.445 00:06:47.445 real 0m0.243s 00:06:47.445 user 0m0.204s 00:06:47.445 sys 0m0.029s 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.445 13:34:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.445 ************************************ 00:06:47.445 END TEST rpc_trace_cmd_test 00:06:47.445 ************************************ 00:06:47.707 13:34:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:47.707 13:34:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:47.707 13:34:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:47.707 13:34:40 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:47.707 13:34:40 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.707 13:34:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.707 ************************************ 00:06:47.707 START TEST rpc_daemon_integrity 00:06:47.707 ************************************ 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.707 { 00:06:47.707 "name": "Malloc2", 00:06:47.707 "aliases": [ 00:06:47.707 "a5749605-bee4-488b-8451-5e132e3d4626" 00:06:47.707 ], 00:06:47.707 "product_name": "Malloc disk", 00:06:47.707 "block_size": 512, 00:06:47.707 "num_blocks": 16384, 00:06:47.707 "uuid": "a5749605-bee4-488b-8451-5e132e3d4626", 00:06:47.707 "assigned_rate_limits": { 00:06:47.707 "rw_ios_per_sec": 0, 00:06:47.707 "rw_mbytes_per_sec": 0, 00:06:47.707 "r_mbytes_per_sec": 0, 00:06:47.707 "w_mbytes_per_sec": 0 00:06:47.707 }, 00:06:47.707 "claimed": false, 00:06:47.707 "zoned": false, 00:06:47.707 "supported_io_types": { 00:06:47.707 "read": true, 00:06:47.707 "write": true, 00:06:47.707 "unmap": true, 00:06:47.707 "write_zeroes": true, 00:06:47.707 "flush": true, 00:06:47.707 "reset": true, 00:06:47.707 "compare": false, 00:06:47.707 "compare_and_write": false, 00:06:47.707 "abort": true, 00:06:47.707 "nvme_admin": false, 00:06:47.707 "nvme_io": false 00:06:47.707 }, 00:06:47.707 "memory_domains": [ 00:06:47.707 { 00:06:47.707 "dma_device_id": "system", 00:06:47.707 "dma_device_type": 1 00:06:47.707 }, 00:06:47.707 { 00:06:47.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.707 "dma_device_type": 2 00:06:47.707 } 00:06:47.707 ], 00:06:47.707 "driver_specific": {} 00:06:47.707 } 00:06:47.707 ]' 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.707 [2024-06-11 13:34:40.564749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:47.707 [2024-06-11 13:34:40.564781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.707 [2024-06-11 13:34:40.564793] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f3090 00:06:47.707 [2024-06-11 13:34:40.564800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.707 [2024-06-11 13:34:40.566029] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:06:47.707 [2024-06-11 13:34:40.566048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.707 Passthru0 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.707 { 00:06:47.707 "name": "Malloc2", 00:06:47.707 "aliases": [ 00:06:47.707 "a5749605-bee4-488b-8451-5e132e3d4626" 00:06:47.707 ], 00:06:47.707 "product_name": "Malloc disk", 00:06:47.707 "block_size": 512, 00:06:47.707 "num_blocks": 16384, 00:06:47.707 "uuid": "a5749605-bee4-488b-8451-5e132e3d4626", 00:06:47.707 "assigned_rate_limits": { 00:06:47.707 "rw_ios_per_sec": 0, 00:06:47.707 "rw_mbytes_per_sec": 0, 00:06:47.707 "r_mbytes_per_sec": 0, 00:06:47.707 "w_mbytes_per_sec": 0 00:06:47.707 }, 00:06:47.707 "claimed": true, 00:06:47.707 "claim_type": "exclusive_write", 00:06:47.707 "zoned": false, 00:06:47.707 "supported_io_types": { 00:06:47.707 "read": true, 00:06:47.707 "write": true, 00:06:47.707 "unmap": true, 00:06:47.707 "write_zeroes": true, 00:06:47.707 "flush": true, 00:06:47.707 "reset": true, 00:06:47.707 "compare": false, 00:06:47.707 "compare_and_write": false, 00:06:47.707 "abort": true, 00:06:47.707 "nvme_admin": false, 00:06:47.707 "nvme_io": false 00:06:47.707 }, 00:06:47.707 "memory_domains": [ 00:06:47.707 { 00:06:47.707 "dma_device_id": "system", 00:06:47.707 "dma_device_type": 1 00:06:47.707 }, 00:06:47.707 { 00:06:47.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.707 "dma_device_type": 2 00:06:47.707 } 00:06:47.707 ], 00:06:47.707 "driver_specific": {} 00:06:47.707 }, 00:06:47.707 { 00:06:47.707 "name": "Passthru0", 00:06:47.707 "aliases": [ 00:06:47.707 "474e02bb-8d4f-5063-9976-c7b8294402af" 00:06:47.707 ], 00:06:47.707 "product_name": "passthru", 00:06:47.707 "block_size": 512, 00:06:47.707 "num_blocks": 16384, 00:06:47.707 "uuid": "474e02bb-8d4f-5063-9976-c7b8294402af", 00:06:47.707 "assigned_rate_limits": { 00:06:47.707 "rw_ios_per_sec": 0, 00:06:47.707 "rw_mbytes_per_sec": 0, 00:06:47.707 "r_mbytes_per_sec": 0, 00:06:47.707 "w_mbytes_per_sec": 0 00:06:47.707 }, 00:06:47.707 "claimed": false, 00:06:47.707 "zoned": false, 00:06:47.707 "supported_io_types": { 00:06:47.707 "read": true, 00:06:47.707 "write": true, 00:06:47.707 "unmap": true, 00:06:47.707 "write_zeroes": true, 00:06:47.707 "flush": true, 00:06:47.707 "reset": true, 00:06:47.707 "compare": false, 00:06:47.707 "compare_and_write": false, 00:06:47.707 "abort": true, 00:06:47.707 "nvme_admin": false, 00:06:47.707 "nvme_io": false 00:06:47.707 }, 00:06:47.707 "memory_domains": [ 00:06:47.707 { 00:06:47.707 "dma_device_id": "system", 00:06:47.707 "dma_device_type": 1 00:06:47.707 }, 00:06:47.707 { 00:06:47.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.707 "dma_device_type": 2 00:06:47.707 } 00:06:47.707 ], 00:06:47.707 "driver_specific": { 00:06:47.707 "passthru": { 00:06:47.707 "name": "Passthru0", 00:06:47.707 "base_bdev_name": "Malloc2" 00:06:47.707 } 00:06:47.707 } 00:06:47.707 } 00:06:47.707 ]' 00:06:47.707 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.968 00:06:47.968 real 0m0.290s 00:06:47.968 user 0m0.190s 00:06:47.968 sys 0m0.035s 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.968 13:34:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.968 ************************************ 00:06:47.968 END TEST rpc_daemon_integrity 00:06:47.968 ************************************ 00:06:47.968 13:34:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:47.968 13:34:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1896017 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@949 -- # '[' -z 1896017 ']' 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@953 -- # kill -0 1896017 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@954 -- # uname 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1896017 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1896017' 00:06:47.968 killing process with pid 1896017 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@968 -- # kill 1896017 00:06:47.968 13:34:40 rpc -- common/autotest_common.sh@973 -- # wait 1896017 00:06:48.254 00:06:48.254 real 0m2.462s 00:06:48.254 user 0m3.237s 00:06:48.254 sys 0m0.686s 00:06:48.254 13:34:41 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.254 13:34:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.254 ************************************ 00:06:48.254 END TEST rpc 00:06:48.254 ************************************ 00:06:48.254 13:34:41 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:48.254 13:34:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 
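The jq checks in the rpc_trace_cmd_test block further up reduce to a handful of assertions on the trace_get_info dump. A hedged sketch, assuming the target was launched with bdev tracepoints enabled (e.g. via spdk_tgt's -e option), which is what makes tpoint_group_mask come back as 0x8:

    info=$(./scripts/rpc.py trace_get_info)
    echo "$info" | jq length                                # 16 groups in the dump above; the test only requires > 2
    echo "$info" | jq 'has("tpoint_group_mask")'            # true
    echo "$info" | jq 'has("tpoint_shm_path")'              # true: /dev/shm/spdk_tgt_trace.pid<pid>
    [ "$(echo "$info" | jq -r .bdev.tpoint_mask)" != 0x0 ]  # bdev tracepoints really enabled (0xffffffffffffffff here)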
00:06:48.254 13:34:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.254 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:06:48.254 ************************************ 00:06:48.254 START TEST skip_rpc 00:06:48.254 ************************************ 00:06:48.254 13:34:41 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:48.521 * Looking for test storage... 00:06:48.521 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:48.521 13:34:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:48.521 13:34:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:48.521 13:34:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:48.521 13:34:41 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.521 13:34:41 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.521 13:34:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.521 ************************************ 00:06:48.521 START TEST skip_rpc 00:06:48.521 ************************************ 00:06:48.521 13:34:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:06:48.521 13:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1896844 00:06:48.521 13:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.521 13:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:48.521 13:34:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:48.521 [2024-06-11 13:34:41.290323] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
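skip_rpc, which starts here, is a negative test: the target is launched with --no-rpc-server, so /var/tmp/spdk.sock is never created and any RPC attempt has to fail. Reduced to standalone commands (the pid handling and sleep are illustrative, not the test's exact helpers):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt=$!
    sleep 5                                    # the test sleeps instead of waiting on a socket that never appears
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered without an RPC server" >&2
        exit 1
    fi
    kill "$tgt"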
00:06:48.521 [2024-06-11 13:34:41.290383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896844 ] 00:06:48.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.521 [2024-06-11 13:34:41.356909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.521 [2024-06-11 13:34:41.431064] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1896844 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1896844 ']' 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1896844 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1896844 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1896844' 00:06:53.809 killing process with pid 1896844 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1896844 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1896844 00:06:53.809 00:06:53.809 real 0m5.276s 00:06:53.809 user 0m5.081s 00:06:53.809 sys 0m0.234s 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.809 13:34:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.809 ************************************ 00:06:53.809 END TEST skip_rpc 
00:06:53.809 ************************************ 00:06:53.809 13:34:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:53.809 13:34:46 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:53.809 13:34:46 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:53.809 13:34:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.809 ************************************ 00:06:53.809 START TEST skip_rpc_with_json 00:06:53.809 ************************************ 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1897885 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1897885 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1897885 ']' 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.809 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:53.810 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.810 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:53.810 13:34:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.810 [2024-06-11 13:34:46.639249] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
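skip_rpc_with_json begins by confirming the error path before configuring anything: asking for a TCP transport that does not yet exist must return a JSON-RPC error, which is the request/response pair printed in the next entries (code -19, i.e. ENODEV). Roughly the same exchange by hand, against the default socket:

    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails with "No such device" (code -19)
    ./scripts/rpc.py nvmf_create_transport -t tcp       # target log then shows "*** TCP Transport Init ***"
    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # now succeeds and lists the transport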
00:06:53.810 [2024-06-11 13:34:46.639298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897885 ] 00:06:53.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.810 [2024-06-11 13:34:46.699842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.070 [2024-06-11 13:34:46.766119] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.642 [2024-06-11 13:34:47.402617] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:54.642 request: 00:06:54.642 { 00:06:54.642 "trtype": "tcp", 00:06:54.642 "method": "nvmf_get_transports", 00:06:54.642 "req_id": 1 00:06:54.642 } 00:06:54.642 Got JSON-RPC error response 00:06:54.642 response: 00:06:54.642 { 00:06:54.642 "code": -19, 00:06:54.642 "message": "No such device" 00:06:54.642 } 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.642 [2024-06-11 13:34:47.414728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:54.642 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.904 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:54.904 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:54.904 { 00:06:54.904 "subsystems": [ 00:06:54.904 { 00:06:54.904 "subsystem": "keyring", 00:06:54.904 "config": [] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "iobuf", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "iobuf_set_options", 00:06:54.904 "params": { 00:06:54.904 "small_pool_count": 8192, 00:06:54.904 "large_pool_count": 1024, 00:06:54.904 "small_bufsize": 8192, 00:06:54.904 "large_bufsize": 135168 00:06:54.904 } 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "sock", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "sock_set_default_impl", 00:06:54.904 "params": { 00:06:54.904 "impl_name": "posix" 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "sock_impl_set_options", 00:06:54.904 "params": { 00:06:54.904 "impl_name": "ssl", 00:06:54.904 "recv_buf_size": 4096, 
00:06:54.904 "send_buf_size": 4096, 00:06:54.904 "enable_recv_pipe": true, 00:06:54.904 "enable_quickack": false, 00:06:54.904 "enable_placement_id": 0, 00:06:54.904 "enable_zerocopy_send_server": true, 00:06:54.904 "enable_zerocopy_send_client": false, 00:06:54.904 "zerocopy_threshold": 0, 00:06:54.904 "tls_version": 0, 00:06:54.904 "enable_ktls": false 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "sock_impl_set_options", 00:06:54.904 "params": { 00:06:54.904 "impl_name": "posix", 00:06:54.904 "recv_buf_size": 2097152, 00:06:54.904 "send_buf_size": 2097152, 00:06:54.904 "enable_recv_pipe": true, 00:06:54.904 "enable_quickack": false, 00:06:54.904 "enable_placement_id": 0, 00:06:54.904 "enable_zerocopy_send_server": true, 00:06:54.904 "enable_zerocopy_send_client": false, 00:06:54.904 "zerocopy_threshold": 0, 00:06:54.904 "tls_version": 0, 00:06:54.904 "enable_ktls": false 00:06:54.904 } 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "vmd", 00:06:54.904 "config": [] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "accel", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "accel_set_options", 00:06:54.904 "params": { 00:06:54.904 "small_cache_size": 128, 00:06:54.904 "large_cache_size": 16, 00:06:54.904 "task_count": 2048, 00:06:54.904 "sequence_count": 2048, 00:06:54.904 "buf_count": 2048 00:06:54.904 } 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "bdev", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "bdev_set_options", 00:06:54.904 "params": { 00:06:54.904 "bdev_io_pool_size": 65535, 00:06:54.904 "bdev_io_cache_size": 256, 00:06:54.904 "bdev_auto_examine": true, 00:06:54.904 "iobuf_small_cache_size": 128, 00:06:54.904 "iobuf_large_cache_size": 16 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "bdev_raid_set_options", 00:06:54.904 "params": { 00:06:54.904 "process_window_size_kb": 1024 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "bdev_iscsi_set_options", 00:06:54.904 "params": { 00:06:54.904 "timeout_sec": 30 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "bdev_nvme_set_options", 00:06:54.904 "params": { 00:06:54.904 "action_on_timeout": "none", 00:06:54.904 "timeout_us": 0, 00:06:54.904 "timeout_admin_us": 0, 00:06:54.904 "keep_alive_timeout_ms": 10000, 00:06:54.904 "arbitration_burst": 0, 00:06:54.904 "low_priority_weight": 0, 00:06:54.904 "medium_priority_weight": 0, 00:06:54.904 "high_priority_weight": 0, 00:06:54.904 "nvme_adminq_poll_period_us": 10000, 00:06:54.904 "nvme_ioq_poll_period_us": 0, 00:06:54.904 "io_queue_requests": 0, 00:06:54.904 "delay_cmd_submit": true, 00:06:54.904 "transport_retry_count": 4, 00:06:54.904 "bdev_retry_count": 3, 00:06:54.904 "transport_ack_timeout": 0, 00:06:54.904 "ctrlr_loss_timeout_sec": 0, 00:06:54.904 "reconnect_delay_sec": 0, 00:06:54.904 "fast_io_fail_timeout_sec": 0, 00:06:54.904 "disable_auto_failback": false, 00:06:54.904 "generate_uuids": false, 00:06:54.904 "transport_tos": 0, 00:06:54.904 "nvme_error_stat": false, 00:06:54.904 "rdma_srq_size": 0, 00:06:54.904 "io_path_stat": false, 00:06:54.904 "allow_accel_sequence": false, 00:06:54.904 "rdma_max_cq_size": 0, 00:06:54.904 "rdma_cm_event_timeout_ms": 0, 00:06:54.904 "dhchap_digests": [ 00:06:54.904 "sha256", 00:06:54.904 "sha384", 00:06:54.904 "sha512" 00:06:54.904 ], 00:06:54.904 "dhchap_dhgroups": [ 00:06:54.904 "null", 00:06:54.904 "ffdhe2048", 00:06:54.904 "ffdhe3072", 
00:06:54.904 "ffdhe4096", 00:06:54.904 "ffdhe6144", 00:06:54.904 "ffdhe8192" 00:06:54.904 ] 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "bdev_nvme_set_hotplug", 00:06:54.904 "params": { 00:06:54.904 "period_us": 100000, 00:06:54.904 "enable": false 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "bdev_wait_for_examine" 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "scsi", 00:06:54.904 "config": null 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "scheduler", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "framework_set_scheduler", 00:06:54.904 "params": { 00:06:54.904 "name": "static" 00:06:54.904 } 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "vhost_scsi", 00:06:54.904 "config": [] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "vhost_blk", 00:06:54.904 "config": [] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "ublk", 00:06:54.904 "config": [] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "nbd", 00:06:54.904 "config": [] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "nvmf", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "nvmf_set_config", 00:06:54.904 "params": { 00:06:54.904 "discovery_filter": "match_any", 00:06:54.904 "admin_cmd_passthru": { 00:06:54.904 "identify_ctrlr": false 00:06:54.904 } 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "nvmf_set_max_subsystems", 00:06:54.904 "params": { 00:06:54.904 "max_subsystems": 1024 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "nvmf_set_crdt", 00:06:54.904 "params": { 00:06:54.904 "crdt1": 0, 00:06:54.904 "crdt2": 0, 00:06:54.904 "crdt3": 0 00:06:54.904 } 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "method": "nvmf_create_transport", 00:06:54.904 "params": { 00:06:54.904 "trtype": "TCP", 00:06:54.904 "max_queue_depth": 128, 00:06:54.904 "max_io_qpairs_per_ctrlr": 127, 00:06:54.904 "in_capsule_data_size": 4096, 00:06:54.904 "max_io_size": 131072, 00:06:54.904 "io_unit_size": 131072, 00:06:54.904 "max_aq_depth": 128, 00:06:54.904 "num_shared_buffers": 511, 00:06:54.904 "buf_cache_size": 4294967295, 00:06:54.904 "dif_insert_or_strip": false, 00:06:54.904 "zcopy": false, 00:06:54.904 "c2h_success": true, 00:06:54.904 "sock_priority": 0, 00:06:54.904 "abort_timeout_sec": 1, 00:06:54.904 "ack_timeout": 0, 00:06:54.904 "data_wr_pool_size": 0 00:06:54.904 } 00:06:54.904 } 00:06:54.904 ] 00:06:54.904 }, 00:06:54.904 { 00:06:54.904 "subsystem": "iscsi", 00:06:54.904 "config": [ 00:06:54.904 { 00:06:54.904 "method": "iscsi_set_options", 00:06:54.904 "params": { 00:06:54.904 "node_base": "iqn.2016-06.io.spdk", 00:06:54.904 "max_sessions": 128, 00:06:54.904 "max_connections_per_session": 2, 00:06:54.905 "max_queue_depth": 64, 00:06:54.905 "default_time2wait": 2, 00:06:54.905 "default_time2retain": 20, 00:06:54.905 "first_burst_length": 8192, 00:06:54.905 "immediate_data": true, 00:06:54.905 "allow_duplicated_isid": false, 00:06:54.905 "error_recovery_level": 0, 00:06:54.905 "nop_timeout": 60, 00:06:54.905 "nop_in_interval": 30, 00:06:54.905 "disable_chap": false, 00:06:54.905 "require_chap": false, 00:06:54.905 "mutual_chap": false, 00:06:54.905 "chap_group": 0, 00:06:54.905 "max_large_datain_per_connection": 64, 00:06:54.905 "max_r2t_per_connection": 4, 00:06:54.905 "pdu_pool_size": 36864, 00:06:54.905 "immediate_data_pool_size": 16384, 00:06:54.905 "data_out_pool_size": 2048 00:06:54.905 } 
00:06:54.905 } 00:06:54.905 ] 00:06:54.905 } 00:06:54.905 ] 00:06:54.905 } 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1897885 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1897885 ']' 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1897885 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1897885 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1897885' 00:06:54.905 killing process with pid 1897885 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1897885 00:06:54.905 13:34:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1897885 00:06:55.167 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1898228 00:06:55.167 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:55.167 13:34:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1898228 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1898228 ']' 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1898228 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1898228 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1898228' 00:07:00.456 killing process with pid 1898228 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1898228 00:07:00.456 13:34:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1898228 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:07:00.457 00:07:00.457 real 0m6.523s 00:07:00.457 user 0m6.419s 00:07:00.457 sys 0m0.511s 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.457 ************************************ 00:07:00.457 END TEST skip_rpc_with_json 00:07:00.457 ************************************ 00:07:00.457 13:34:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:00.457 13:34:53 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:00.457 13:34:53 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.457 13:34:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.457 ************************************ 00:07:00.457 START TEST skip_rpc_with_delay 00:07:00.457 ************************************ 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.457 [2024-06-11 13:34:53.244143] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
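The grep against log.txt above is the payoff of skip_rpc_with_json: the JSON blob dumped by save_config earlier is fed back to a fresh target, which must re-create the TCP transport purely from that file. As plain commands (the test keeps config.json and log.txt under test/rpc/ in the repo; the pid variable is illustrative):

    ./scripts/rpc.py save_config > config.json          # the JSON blob shown above
    kill "$tgt"                                         # stop the configured target
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt                # transport restored from the config file alone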
00:07:00.457 [2024-06-11 13:34:53.244224] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:00.457 00:07:00.457 real 0m0.070s 00:07:00.457 user 0m0.045s 00:07:00.457 sys 0m0.024s 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.457 13:34:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:00.457 ************************************ 00:07:00.457 END TEST skip_rpc_with_delay 00:07:00.457 ************************************ 00:07:00.457 13:34:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:00.457 13:34:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:00.457 13:34:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:00.457 13:34:53 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:00.457 13:34:53 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.457 13:34:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.457 ************************************ 00:07:00.457 START TEST exit_on_failed_rpc_init 00:07:00.457 ************************************ 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1899291 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1899291 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1899291 ']' 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:00.457 13:34:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:00.716 [2024-06-11 13:34:53.388937] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:00.716 [2024-06-11 13:34:53.388983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899291 ] 00:07:00.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.716 [2024-06-11 13:34:53.449198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.716 [2024-06-11 13:34:53.514738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:01.286 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.547 [2024-06-11 13:34:54.203187] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:01.547 [2024-06-11 13:34:54.203239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899475 ] 00:07:01.547 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.547 [2024-06-11 13:34:54.277866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.547 [2024-06-11 13:34:54.342319] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.547 [2024-06-11 13:34:54.342378] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
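The "socket ... in use" error just above is exactly what exit_on_failed_rpc_init provokes: a second spdk_tgt pointed at the same default RPC socket must fail to initialize and exit non-zero. Stripped of the NOT()/es bookkeeping, the scenario is roughly:

    ./build/bin/spdk_tgt -m 0x1 &   # first target owns /var/tmp/spdk.sock
    sleep 1                         # illustrative; the test polls for the listen socket instead
    ./build/bin/spdk_tgt -m 0x2     # second target: "socket ... in use", spdk_app_start fails
    echo $?                         # a non-zero exit code is all the test asserts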
00:07:01.547 [2024-06-11 13:34:54.342388] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:01.547 [2024-06-11 13:34:54.342395] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1899291 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1899291 ']' 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1899291 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1899291 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1899291' 00:07:01.547 killing process with pid 1899291 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1899291 00:07:01.547 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1899291 00:07:01.807 00:07:01.807 real 0m1.328s 00:07:01.807 user 0m1.565s 00:07:01.807 sys 0m0.356s 00:07:01.807 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.807 13:34:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:01.807 ************************************ 00:07:01.807 END TEST exit_on_failed_rpc_init 00:07:01.807 ************************************ 00:07:01.807 13:34:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:07:01.807 00:07:01.807 real 0m13.612s 00:07:01.807 user 0m13.294s 00:07:01.807 sys 0m1.380s 00:07:01.807 13:34:54 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.807 13:34:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.807 ************************************ 00:07:01.807 END TEST skip_rpc 00:07:01.807 ************************************ 00:07:02.068 13:34:54 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:02.068 13:34:54 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.068 13:34:54 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.068 13:34:54 -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.068 ************************************ 00:07:02.068 START TEST rpc_client 00:07:02.068 ************************************ 00:07:02.068 13:34:54 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:02.068 * Looking for test storage... 00:07:02.068 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:07:02.068 13:34:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:02.068 OK 00:07:02.068 13:34:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:02.068 00:07:02.068 real 0m0.123s 00:07:02.068 user 0m0.049s 00:07:02.068 sys 0m0.080s 00:07:02.068 13:34:54 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.068 13:34:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:02.068 ************************************ 00:07:02.068 END TEST rpc_client 00:07:02.068 ************************************ 00:07:02.068 13:34:54 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:02.068 13:34:54 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.068 13:34:54 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.068 13:34:54 -- common/autotest_common.sh@10 -- # set +x 00:07:02.068 ************************************ 00:07:02.068 START TEST json_config 00:07:02.068 ************************************ 00:07:02.068 13:34:54 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:02.329 13:34:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.329 13:34:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.329 13:34:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.329 13:34:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.329 13:34:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.329 13:34:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.329 13:34:55 json_config -- paths/export.sh@5 -- # export PATH 00:07:02.329 13:34:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@47 -- # : 0 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.329 13:34:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:02.329 INFO: JSON configuration test init 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.329 13:34:55 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:02.329 13:34:55 json_config -- json_config/common.sh@9 -- # local app=target 00:07:02.329 13:34:55 json_config -- json_config/common.sh@10 -- # shift 00:07:02.329 13:34:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:02.329 13:34:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:02.329 13:34:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:02.329 13:34:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.329 13:34:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:02.329 13:34:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1899743 00:07:02.329 13:34:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:02.329 Waiting for target to run... 
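The target is started with --wait-for-rpc, so nothing proceeds until the UNIX socket at /var/tmp/spdk_tgt.sock answers. A minimal sketch of that launch-and-wait step, assuming the spdk repository root as the working directory and rpc_get_methods as the probe call (the real waitforlisten helper in autotest_common.sh does more bookkeeping):

    # Launch the target in RPC-wait mode, then poll its UNIX socket until it answers.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; then
            echo "target ($tgt_pid) is listening on /var/tmp/spdk_tgt.sock"
            break
        fi
        sleep 0.1
    done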
00:07:02.329 13:34:55 json_config -- json_config/common.sh@25 -- # waitforlisten 1899743 /var/tmp/spdk_tgt.sock 00:07:02.329 13:34:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@830 -- # '[' -z 1899743 ']' 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:02.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:02.329 13:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.329 [2024-06-11 13:34:55.144031] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:02.329 [2024-06-11 13:34:55.144084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899743 ] 00:07:02.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.589 [2024-06-11 13:34:55.407688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.589 [2024-06-11 13:34:55.463030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.159 13:34:55 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:03.159 13:34:55 json_config -- common/autotest_common.sh@863 -- # return 0 00:07:03.159 13:34:55 json_config -- json_config/common.sh@26 -- # echo '' 00:07:03.159 00:07:03.159 13:34:55 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:03.159 13:34:55 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:03.159 13:34:55 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:03.159 13:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.159 13:34:55 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:03.159 13:34:55 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:03.159 13:34:55 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:03.159 13:34:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.159 13:34:55 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:03.159 13:34:55 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:03.159 13:34:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:03.729 13:34:56 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:03.729 13:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@45 -- 
# local ret=0 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:03.729 13:34:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:03.729 13:34:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:07:03.989 13:34:56 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.989 13:34:56 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.989 13:34:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@296 -- # e810=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@297 -- # x722=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@298 -- # mlx=() 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:07:10.569 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@362 -- # 
NVME_CONNECT='nvme connect -i 15' 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:07:10.569 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:07:10.569 Found net devices under 0000:98:00.0: mlx_0_0 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:07:10.569 Found net devices under 0000:98:00.1: mlx_0_1 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@58 -- # uname 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 
00:07:10.569 13:35:03 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:10.569 13:35:03 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:10.570 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:10.570 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:07:10.570 altname enp152s0f0np0 00:07:10.570 altname ens817f0np0 00:07:10.570 inet 192.168.100.8/24 scope global mlx_0_0 00:07:10.570 valid_lft forever preferred_lft forever 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@75 -- # [[ -z 
192.168.100.9 ]] 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:10.570 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:10.570 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:07:10.570 altname enp152s0f1np1 00:07:10.570 altname ens817f1np1 00:07:10.570 inet 192.168.100.9/24 scope global mlx_0_1 00:07:10.570 valid_lft forever preferred_lft forever 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@422 -- # return 0 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:10.570 13:35:03 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.830 13:35:03 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@105 -- # continue 2 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:10.831 192.168.100.9' 
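The two addresses collected into RDMA_IP_LIST come from the same pipeline applied to each RDMA-backed netdev; condensed, with the interface names taken from the output above:

    # First IPv4 address of each mlx_0_* interface (192.168.100.8 and 192.168.100.9 here).
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done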
00:07:10.831 13:35:03 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:10.831 192.168.100.9' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@457 -- # head -n 1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:10.831 192.168.100.9' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@458 -- # head -n 1 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:10.831 13:35:03 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:10.831 13:35:03 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:07:10.831 13:35:03 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:10.831 13:35:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:10.831 MallocForNvmf0 00:07:11.091 13:35:03 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:11.091 13:35:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:11.091 MallocForNvmf1 00:07:11.091 13:35:03 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:07:11.091 13:35:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:07:11.350 [2024-06-11 13:35:04.040991] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:11.350 [2024-06-11 13:35:04.075238] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dfc140/0x1e08f00) succeed. 00:07:11.350 [2024-06-11 13:35:04.089421] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1dfe330/0x1e88f80) succeed. 
00:07:11.350 13:35:04 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:11.350 13:35:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:11.610 13:35:04 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:11.610 13:35:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:11.610 13:35:04 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:11.610 13:35:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:11.871 13:35:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:11.871 13:35:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:11.871 [2024-06-11 13:35:04.726862] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:11.871 13:35:04 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:07:11.871 13:35:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:11.871 13:35:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.871 13:35:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:11.871 13:35:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:11.871 13:35:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 13:35:04 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:12.132 13:35:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:12.132 13:35:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:12.132 MallocBdevForConfigChangeCheck 00:07:12.132 13:35:04 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:12.132 13:35:04 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:12.132 13:35:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.132 13:35:05 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:12.132 13:35:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:12.705 13:35:05 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:07:12.705 INFO: shutting down applications... 
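Before the shutdown check, the whole target configuration was built over RPC; the calls logged above condense to the sequence below (socket path, bdev and subsystem names as logged; the initial load_config/gen_nvme.sh step is omitted, and writing the save_config output to spdk_tgt_config.json via a plain redirect is an assumption about how the harness captures it):

    rpc=./scripts/rpc.py; sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t rdma -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $rpc -s $sock save_config > spdk_tgt_config.json   # snapshot reused for the relaunch below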
00:07:12.705 13:35:05 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:12.705 13:35:05 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:12.705 13:35:05 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:12.705 13:35:05 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:12.966 Calling clear_iscsi_subsystem 00:07:12.966 Calling clear_nvmf_subsystem 00:07:12.966 Calling clear_nbd_subsystem 00:07:12.966 Calling clear_ublk_subsystem 00:07:12.966 Calling clear_vhost_blk_subsystem 00:07:12.966 Calling clear_vhost_scsi_subsystem 00:07:12.966 Calling clear_bdev_subsystem 00:07:12.966 13:35:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:07:12.966 13:35:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:07:12.966 13:35:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:07:12.966 13:35:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:12.966 13:35:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:12.966 13:35:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:13.227 13:35:06 json_config -- json_config/json_config.sh@345 -- # break 00:07:13.227 13:35:06 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:07:13.227 13:35:06 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:07:13.227 13:35:06 json_config -- json_config/common.sh@31 -- # local app=target 00:07:13.227 13:35:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:13.227 13:35:06 json_config -- json_config/common.sh@35 -- # [[ -n 1899743 ]] 00:07:13.227 13:35:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1899743 00:07:13.227 13:35:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:13.227 13:35:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:13.227 13:35:06 json_config -- json_config/common.sh@41 -- # kill -0 1899743 00:07:13.227 13:35:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:13.798 13:35:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:13.798 13:35:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:13.798 13:35:06 json_config -- json_config/common.sh@41 -- # kill -0 1899743 00:07:13.798 13:35:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:13.798 13:35:06 json_config -- json_config/common.sh@43 -- # break 00:07:13.798 13:35:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:13.798 13:35:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:13.798 SPDK target shutdown done 00:07:13.798 13:35:06 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:07:13.798 INFO: relaunching applications... 
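The shutdown that precedes the relaunch is plain signal-and-poll, with the same 30-iteration / 0.5 s bound seen in json_config/common.sh above:

    # Ask the target to exit, then poll until its PID is gone (up to ~15 s).
    kill -SIGINT "$tgt_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$tgt_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done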
00:07:13.798 13:35:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:13.798 13:35:06 json_config -- json_config/common.sh@9 -- # local app=target 00:07:13.798 13:35:06 json_config -- json_config/common.sh@10 -- # shift 00:07:13.798 13:35:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:13.798 13:35:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:13.798 13:35:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:13.798 13:35:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:13.798 13:35:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:13.798 13:35:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1904532 00:07:13.798 13:35:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:13.798 Waiting for target to run... 00:07:13.798 13:35:06 json_config -- json_config/common.sh@25 -- # waitforlisten 1904532 /var/tmp/spdk_tgt.sock 00:07:13.798 13:35:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:13.798 13:35:06 json_config -- common/autotest_common.sh@830 -- # '[' -z 1904532 ']' 00:07:13.798 13:35:06 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:13.798 13:35:06 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:13.798 13:35:06 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:13.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:13.798 13:35:06 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:13.798 13:35:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.798 [2024-06-11 13:35:06.643526] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:13.798 [2024-06-11 13:35:06.643581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904532 ] 00:07:13.798 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.370 [2024-06-11 13:35:07.043740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.370 [2024-06-11 13:35:07.104912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.941 [2024-06-11 13:35:07.629266] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24ff9e0/0x2365680) succeed. 00:07:14.941 [2024-06-11 13:35:07.642711] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24ffbb0/0x23e5700) succeed. 
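The relaunch swaps --wait-for-rpc for --json, so the target rebuilds the malloc bdevs, the rdma transport and the cnode1 listener on its own from the saved snapshot; in short, with the config path written relative to the repository root:

    # Restart the target directly from the saved configuration snapshot.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    tgt_pid=$!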
00:07:14.942 [2024-06-11 13:35:07.698922] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:14.942 13:35:07 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:14.942 13:35:07 json_config -- common/autotest_common.sh@863 -- # return 0 00:07:14.942 13:35:07 json_config -- json_config/common.sh@26 -- # echo '' 00:07:14.942 00:07:14.942 13:35:07 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:07:14.942 13:35:07 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:14.942 INFO: Checking if target configuration is the same... 00:07:14.942 13:35:07 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.942 13:35:07 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:07:14.942 13:35:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:14.942 + '[' 2 -ne 2 ']' 00:07:14.942 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:14.942 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:14.942 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:14.942 +++ basename /dev/fd/62 00:07:14.942 ++ mktemp /tmp/62.XXX 00:07:14.942 + tmp_file_1=/tmp/62.oex 00:07:14.942 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.942 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:14.942 + tmp_file_2=/tmp/spdk_tgt_config.json.bsr 00:07:14.942 + ret=0 00:07:14.942 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:15.201 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:15.201 + diff -u /tmp/62.oex /tmp/spdk_tgt_config.json.bsr 00:07:15.201 + echo 'INFO: JSON config files are the same' 00:07:15.201 INFO: JSON config files are the same 00:07:15.201 + rm /tmp/62.oex /tmp/spdk_tgt_config.json.bsr 00:07:15.201 + exit 0 00:07:15.201 13:35:08 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:07:15.201 13:35:08 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:15.201 INFO: changing configuration and checking if this can be detected... 
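Both the "files are the same" verdict above and the "configuration change detected" verdict below come from one comparison: dump the live config, sort both JSON documents with config_filter.py, and diff them. A condensed sketch, assuming config_filter.py filters stdin to stdout; the temp-file names here are placeholders for what json_diff.sh creates with mktemp:

    # Compare the running configuration against the on-disk snapshot, ignoring key order.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'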
00:07:15.201 13:35:08 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:15.201 13:35:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:15.462 13:35:08 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:15.462 13:35:08 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:07:15.462 13:35:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:15.462 + '[' 2 -ne 2 ']' 00:07:15.462 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:15.462 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:07:15.462 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:15.462 +++ basename /dev/fd/62 00:07:15.462 ++ mktemp /tmp/62.XXX 00:07:15.462 + tmp_file_1=/tmp/62.gC6 00:07:15.462 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:15.462 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:15.462 + tmp_file_2=/tmp/spdk_tgt_config.json.y5v 00:07:15.462 + ret=0 00:07:15.462 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:15.723 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:15.723 + diff -u /tmp/62.gC6 /tmp/spdk_tgt_config.json.y5v 00:07:15.723 + ret=1 00:07:15.723 + echo '=== Start of file: /tmp/62.gC6 ===' 00:07:15.723 + cat /tmp/62.gC6 00:07:15.723 + echo '=== End of file: /tmp/62.gC6 ===' 00:07:15.723 + echo '' 00:07:15.723 + echo '=== Start of file: /tmp/spdk_tgt_config.json.y5v ===' 00:07:15.723 + cat /tmp/spdk_tgt_config.json.y5v 00:07:15.723 + echo '=== End of file: /tmp/spdk_tgt_config.json.y5v ===' 00:07:15.723 + echo '' 00:07:15.723 + rm /tmp/62.gC6 /tmp/spdk_tgt_config.json.y5v 00:07:15.723 + exit 1 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:07:15.723 INFO: configuration change detected. 
00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:07:15.723 13:35:08 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:15.723 13:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@317 -- # [[ -n 1904532 ]] 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:07:15.723 13:35:08 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:15.723 13:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@193 -- # uname -s 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:07:15.723 13:35:08 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:07:15.723 13:35:08 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:15.723 13:35:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:15.984 13:35:08 json_config -- json_config/json_config.sh@323 -- # killprocess 1904532 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@949 -- # '[' -z 1904532 ']' 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@953 -- # kill -0 1904532 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@954 -- # uname 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1904532 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1904532' 00:07:15.984 killing process with pid 1904532 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@968 -- # kill 1904532 00:07:15.984 13:35:08 json_config -- common/autotest_common.sh@973 -- # wait 1904532 00:07:16.245 13:35:09 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:07:16.245 13:35:09 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:07:16.245 13:35:09 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:16.245 13:35:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 13:35:09 json_config -- json_config/json_config.sh@328 -- # return 0 00:07:16.245 13:35:09 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:07:16.245 INFO: Success 00:07:16.245 13:35:09 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@117 -- # sync 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.245 13:35:09 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:07:16.245 00:07:16.245 real 0m14.099s 00:07:16.245 user 0m17.442s 00:07:16.245 sys 0m6.795s 00:07:16.245 13:35:09 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.245 13:35:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 ************************************ 00:07:16.245 END TEST json_config 00:07:16.245 ************************************ 00:07:16.245 13:35:09 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:16.245 13:35:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:16.245 13:35:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.245 13:35:09 -- common/autotest_common.sh@10 -- # set +x 00:07:16.245 ************************************ 00:07:16.245 START TEST json_config_extra_key 00:07:16.245 ************************************ 00:07:16.245 13:35:09 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:16.507 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:16.507 13:35:09 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:16.507 13:35:09 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.507 13:35:09 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.507 13:35:09 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.507 13:35:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.507 13:35:09 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.508 13:35:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.508 13:35:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:16.508 13:35:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.508 13:35:09 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:16.508 13:35:09 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:16.508 INFO: launching applications... 00:07:16.508 13:35:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1905117 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:16.508 Waiting for target to run... 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1905117 /var/tmp/spdk_tgt.sock 00:07:16.508 13:35:09 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1905117 ']' 00:07:16.508 13:35:09 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:07:16.508 13:35:09 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:16.508 13:35:09 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:16.508 13:35:09 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:16.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:16.508 13:35:09 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:16.508 13:35:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:16.508 [2024-06-11 13:35:09.309262] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:16.508 [2024-06-11 13:35:09.309325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905117 ] 00:07:16.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.079 [2024-06-11 13:35:09.716898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.079 [2024-06-11 13:35:09.768478] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.340 13:35:10 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:17.340 13:35:10 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:17.340 00:07:17.340 13:35:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:17.340 INFO: shutting down applications... 00:07:17.340 13:35:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1905117 ]] 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1905117 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1905117 00:07:17.340 13:35:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1905117 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:17.911 13:35:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:17.911 SPDK target shutdown done 00:07:17.911 13:35:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:17.911 Success 00:07:17.911 00:07:17.911 real 0m1.442s 00:07:17.911 user 0m0.962s 00:07:17.911 sys 0m0.505s 00:07:17.911 13:35:10 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.911 13:35:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:17.911 ************************************ 00:07:17.911 END TEST json_config_extra_key 00:07:17.911 ************************************ 00:07:17.911 13:35:10 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:17.911 13:35:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:17.911 13:35:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.911 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:07:17.911 ************************************ 00:07:17.911 START TEST alias_rpc 00:07:17.911 ************************************ 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:17.911 * Looking for test storage... 00:07:17.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:07:17.911 13:35:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.911 13:35:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1905406 00:07:17.911 13:35:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1905406 00:07:17.911 13:35:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1905406 ']' 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:17.911 13:35:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.172 [2024-06-11 13:35:10.824688] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
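Both the SIGINT-and-poll shutdown traced above for json_config_extra_key and the killprocess helper used by alias_rpc below follow the same kill/poll pattern. A minimal sketch of the SIGINT variant, assuming $tgt_pid holds the target PID from the launch sketch earlier:

# Ask the target to shut down gracefully, then wait up to ~15s
# (30 iterations x 0.5s) for the PID to disappear, as in the trace above.
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$tgt_pid" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'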
00:07:18.172 [2024-06-11 13:35:10.824767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905406 ] 00:07:18.172 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.172 [2024-06-11 13:35:10.891755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.172 [2024-06-11 13:35:10.966951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.743 13:35:11 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:18.743 13:35:11 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:18.743 13:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:19.003 13:35:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1905406 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1905406 ']' 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1905406 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1905406 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1905406' 00:07:19.003 killing process with pid 1905406 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@968 -- # kill 1905406 00:07:19.003 13:35:11 alias_rpc -- common/autotest_common.sh@973 -- # wait 1905406 00:07:19.264 00:07:19.264 real 0m1.392s 00:07:19.264 user 0m1.527s 00:07:19.264 sys 0m0.385s 00:07:19.264 13:35:12 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.264 13:35:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.264 ************************************ 00:07:19.264 END TEST alias_rpc 00:07:19.264 ************************************ 00:07:19.264 13:35:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:19.264 13:35:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:19.264 13:35:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:19.264 13:35:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:19.264 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:07:19.264 ************************************ 00:07:19.264 START TEST spdkcli_tcp 00:07:19.264 ************************************ 00:07:19.264 13:35:12 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:19.567 * Looking for test storage... 
00:07:19.567 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1905768 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1905768 00:07:19.567 13:35:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1905768 ']' 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:19.567 13:35:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.567 [2024-06-11 13:35:12.288702] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
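The spdkcli_tcp test started here exercises RPC over TCP by bridging port 9998 to the target's UNIX-domain socket; the socat and rpc.py invocations appear in the trace that follows. A minimal sketch of that bridge, with the flags reused verbatim from the trace (the final cleanup kill is illustrative, not the test's err_cleanup):

# Bridge TCP 127.0.0.1:9998 to the target's RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Query the method list over TCP instead of the UNIX socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"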
00:07:19.567 [2024-06-11 13:35:12.288753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905768 ] 00:07:19.567 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.567 [2024-06-11 13:35:12.349329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.567 [2024-06-11 13:35:12.414054] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.567 [2024-06-11 13:35:12.414087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.224 13:35:13 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:20.224 13:35:13 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:07:20.224 13:35:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1906100 00:07:20.224 13:35:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:20.224 13:35:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:20.486 [ 00:07:20.486 "bdev_malloc_delete", 00:07:20.486 "bdev_malloc_create", 00:07:20.486 "bdev_null_resize", 00:07:20.486 "bdev_null_delete", 00:07:20.486 "bdev_null_create", 00:07:20.486 "bdev_nvme_cuse_unregister", 00:07:20.486 "bdev_nvme_cuse_register", 00:07:20.486 "bdev_opal_new_user", 00:07:20.486 "bdev_opal_set_lock_state", 00:07:20.486 "bdev_opal_delete", 00:07:20.486 "bdev_opal_get_info", 00:07:20.486 "bdev_opal_create", 00:07:20.486 "bdev_nvme_opal_revert", 00:07:20.486 "bdev_nvme_opal_init", 00:07:20.486 "bdev_nvme_send_cmd", 00:07:20.486 "bdev_nvme_get_path_iostat", 00:07:20.486 "bdev_nvme_get_mdns_discovery_info", 00:07:20.486 "bdev_nvme_stop_mdns_discovery", 00:07:20.486 "bdev_nvme_start_mdns_discovery", 00:07:20.486 "bdev_nvme_set_multipath_policy", 00:07:20.486 "bdev_nvme_set_preferred_path", 00:07:20.486 "bdev_nvme_get_io_paths", 00:07:20.486 "bdev_nvme_remove_error_injection", 00:07:20.486 "bdev_nvme_add_error_injection", 00:07:20.486 "bdev_nvme_get_discovery_info", 00:07:20.486 "bdev_nvme_stop_discovery", 00:07:20.486 "bdev_nvme_start_discovery", 00:07:20.486 "bdev_nvme_get_controller_health_info", 00:07:20.486 "bdev_nvme_disable_controller", 00:07:20.486 "bdev_nvme_enable_controller", 00:07:20.486 "bdev_nvme_reset_controller", 00:07:20.486 "bdev_nvme_get_transport_statistics", 00:07:20.486 "bdev_nvme_apply_firmware", 00:07:20.486 "bdev_nvme_detach_controller", 00:07:20.486 "bdev_nvme_get_controllers", 00:07:20.486 "bdev_nvme_attach_controller", 00:07:20.486 "bdev_nvme_set_hotplug", 00:07:20.486 "bdev_nvme_set_options", 00:07:20.486 "bdev_passthru_delete", 00:07:20.486 "bdev_passthru_create", 00:07:20.486 "bdev_lvol_set_parent_bdev", 00:07:20.486 "bdev_lvol_set_parent", 00:07:20.486 "bdev_lvol_check_shallow_copy", 00:07:20.486 "bdev_lvol_start_shallow_copy", 00:07:20.486 "bdev_lvol_grow_lvstore", 00:07:20.486 "bdev_lvol_get_lvols", 00:07:20.486 "bdev_lvol_get_lvstores", 00:07:20.486 "bdev_lvol_delete", 00:07:20.486 "bdev_lvol_set_read_only", 00:07:20.486 "bdev_lvol_resize", 00:07:20.486 "bdev_lvol_decouple_parent", 00:07:20.486 "bdev_lvol_inflate", 00:07:20.486 "bdev_lvol_rename", 00:07:20.486 "bdev_lvol_clone_bdev", 00:07:20.486 "bdev_lvol_clone", 00:07:20.486 "bdev_lvol_snapshot", 00:07:20.486 "bdev_lvol_create", 00:07:20.486 "bdev_lvol_delete_lvstore", 00:07:20.486 "bdev_lvol_rename_lvstore", 
00:07:20.486 "bdev_lvol_create_lvstore", 00:07:20.486 "bdev_raid_set_options", 00:07:20.486 "bdev_raid_remove_base_bdev", 00:07:20.486 "bdev_raid_add_base_bdev", 00:07:20.486 "bdev_raid_delete", 00:07:20.486 "bdev_raid_create", 00:07:20.486 "bdev_raid_get_bdevs", 00:07:20.486 "bdev_error_inject_error", 00:07:20.486 "bdev_error_delete", 00:07:20.486 "bdev_error_create", 00:07:20.486 "bdev_split_delete", 00:07:20.486 "bdev_split_create", 00:07:20.486 "bdev_delay_delete", 00:07:20.486 "bdev_delay_create", 00:07:20.486 "bdev_delay_update_latency", 00:07:20.486 "bdev_zone_block_delete", 00:07:20.486 "bdev_zone_block_create", 00:07:20.486 "blobfs_create", 00:07:20.486 "blobfs_detect", 00:07:20.486 "blobfs_set_cache_size", 00:07:20.486 "bdev_aio_delete", 00:07:20.486 "bdev_aio_rescan", 00:07:20.486 "bdev_aio_create", 00:07:20.487 "bdev_ftl_set_property", 00:07:20.487 "bdev_ftl_get_properties", 00:07:20.487 "bdev_ftl_get_stats", 00:07:20.487 "bdev_ftl_unmap", 00:07:20.487 "bdev_ftl_unload", 00:07:20.487 "bdev_ftl_delete", 00:07:20.487 "bdev_ftl_load", 00:07:20.487 "bdev_ftl_create", 00:07:20.487 "bdev_virtio_attach_controller", 00:07:20.487 "bdev_virtio_scsi_get_devices", 00:07:20.487 "bdev_virtio_detach_controller", 00:07:20.487 "bdev_virtio_blk_set_hotplug", 00:07:20.487 "bdev_iscsi_delete", 00:07:20.487 "bdev_iscsi_create", 00:07:20.487 "bdev_iscsi_set_options", 00:07:20.487 "accel_error_inject_error", 00:07:20.487 "ioat_scan_accel_module", 00:07:20.487 "dsa_scan_accel_module", 00:07:20.487 "iaa_scan_accel_module", 00:07:20.487 "keyring_file_remove_key", 00:07:20.487 "keyring_file_add_key", 00:07:20.487 "keyring_linux_set_options", 00:07:20.487 "iscsi_get_histogram", 00:07:20.487 "iscsi_enable_histogram", 00:07:20.487 "iscsi_set_options", 00:07:20.487 "iscsi_get_auth_groups", 00:07:20.487 "iscsi_auth_group_remove_secret", 00:07:20.487 "iscsi_auth_group_add_secret", 00:07:20.487 "iscsi_delete_auth_group", 00:07:20.487 "iscsi_create_auth_group", 00:07:20.487 "iscsi_set_discovery_auth", 00:07:20.487 "iscsi_get_options", 00:07:20.487 "iscsi_target_node_request_logout", 00:07:20.487 "iscsi_target_node_set_redirect", 00:07:20.487 "iscsi_target_node_set_auth", 00:07:20.487 "iscsi_target_node_add_lun", 00:07:20.487 "iscsi_get_stats", 00:07:20.487 "iscsi_get_connections", 00:07:20.487 "iscsi_portal_group_set_auth", 00:07:20.487 "iscsi_start_portal_group", 00:07:20.487 "iscsi_delete_portal_group", 00:07:20.487 "iscsi_create_portal_group", 00:07:20.487 "iscsi_get_portal_groups", 00:07:20.487 "iscsi_delete_target_node", 00:07:20.487 "iscsi_target_node_remove_pg_ig_maps", 00:07:20.487 "iscsi_target_node_add_pg_ig_maps", 00:07:20.487 "iscsi_create_target_node", 00:07:20.487 "iscsi_get_target_nodes", 00:07:20.487 "iscsi_delete_initiator_group", 00:07:20.487 "iscsi_initiator_group_remove_initiators", 00:07:20.487 "iscsi_initiator_group_add_initiators", 00:07:20.487 "iscsi_create_initiator_group", 00:07:20.487 "iscsi_get_initiator_groups", 00:07:20.487 "nvmf_set_crdt", 00:07:20.487 "nvmf_set_config", 00:07:20.487 "nvmf_set_max_subsystems", 00:07:20.487 "nvmf_stop_mdns_prr", 00:07:20.487 "nvmf_publish_mdns_prr", 00:07:20.487 "nvmf_subsystem_get_listeners", 00:07:20.487 "nvmf_subsystem_get_qpairs", 00:07:20.487 "nvmf_subsystem_get_controllers", 00:07:20.487 "nvmf_get_stats", 00:07:20.487 "nvmf_get_transports", 00:07:20.487 "nvmf_create_transport", 00:07:20.487 "nvmf_get_targets", 00:07:20.487 "nvmf_delete_target", 00:07:20.487 "nvmf_create_target", 00:07:20.487 "nvmf_subsystem_allow_any_host", 00:07:20.487 
"nvmf_subsystem_remove_host", 00:07:20.487 "nvmf_subsystem_add_host", 00:07:20.487 "nvmf_ns_remove_host", 00:07:20.487 "nvmf_ns_add_host", 00:07:20.487 "nvmf_subsystem_remove_ns", 00:07:20.487 "nvmf_subsystem_add_ns", 00:07:20.487 "nvmf_subsystem_listener_set_ana_state", 00:07:20.487 "nvmf_discovery_get_referrals", 00:07:20.487 "nvmf_discovery_remove_referral", 00:07:20.487 "nvmf_discovery_add_referral", 00:07:20.487 "nvmf_subsystem_remove_listener", 00:07:20.487 "nvmf_subsystem_add_listener", 00:07:20.487 "nvmf_delete_subsystem", 00:07:20.487 "nvmf_create_subsystem", 00:07:20.487 "nvmf_get_subsystems", 00:07:20.487 "env_dpdk_get_mem_stats", 00:07:20.487 "nbd_get_disks", 00:07:20.487 "nbd_stop_disk", 00:07:20.487 "nbd_start_disk", 00:07:20.487 "ublk_recover_disk", 00:07:20.487 "ublk_get_disks", 00:07:20.487 "ublk_stop_disk", 00:07:20.487 "ublk_start_disk", 00:07:20.487 "ublk_destroy_target", 00:07:20.487 "ublk_create_target", 00:07:20.487 "virtio_blk_create_transport", 00:07:20.487 "virtio_blk_get_transports", 00:07:20.487 "vhost_controller_set_coalescing", 00:07:20.487 "vhost_get_controllers", 00:07:20.487 "vhost_delete_controller", 00:07:20.487 "vhost_create_blk_controller", 00:07:20.487 "vhost_scsi_controller_remove_target", 00:07:20.487 "vhost_scsi_controller_add_target", 00:07:20.487 "vhost_start_scsi_controller", 00:07:20.487 "vhost_create_scsi_controller", 00:07:20.487 "thread_set_cpumask", 00:07:20.487 "framework_get_scheduler", 00:07:20.487 "framework_set_scheduler", 00:07:20.487 "framework_get_reactors", 00:07:20.487 "thread_get_io_channels", 00:07:20.487 "thread_get_pollers", 00:07:20.487 "thread_get_stats", 00:07:20.487 "framework_monitor_context_switch", 00:07:20.487 "spdk_kill_instance", 00:07:20.487 "log_enable_timestamps", 00:07:20.487 "log_get_flags", 00:07:20.487 "log_clear_flag", 00:07:20.487 "log_set_flag", 00:07:20.487 "log_get_level", 00:07:20.487 "log_set_level", 00:07:20.487 "log_get_print_level", 00:07:20.487 "log_set_print_level", 00:07:20.487 "framework_enable_cpumask_locks", 00:07:20.487 "framework_disable_cpumask_locks", 00:07:20.487 "framework_wait_init", 00:07:20.487 "framework_start_init", 00:07:20.487 "scsi_get_devices", 00:07:20.487 "bdev_get_histogram", 00:07:20.487 "bdev_enable_histogram", 00:07:20.487 "bdev_set_qos_limit", 00:07:20.487 "bdev_set_qd_sampling_period", 00:07:20.487 "bdev_get_bdevs", 00:07:20.487 "bdev_reset_iostat", 00:07:20.487 "bdev_get_iostat", 00:07:20.487 "bdev_examine", 00:07:20.487 "bdev_wait_for_examine", 00:07:20.487 "bdev_set_options", 00:07:20.487 "notify_get_notifications", 00:07:20.487 "notify_get_types", 00:07:20.487 "accel_get_stats", 00:07:20.487 "accel_set_options", 00:07:20.487 "accel_set_driver", 00:07:20.487 "accel_crypto_key_destroy", 00:07:20.487 "accel_crypto_keys_get", 00:07:20.487 "accel_crypto_key_create", 00:07:20.487 "accel_assign_opc", 00:07:20.487 "accel_get_module_info", 00:07:20.487 "accel_get_opc_assignments", 00:07:20.487 "vmd_rescan", 00:07:20.487 "vmd_remove_device", 00:07:20.487 "vmd_enable", 00:07:20.487 "sock_get_default_impl", 00:07:20.487 "sock_set_default_impl", 00:07:20.487 "sock_impl_set_options", 00:07:20.487 "sock_impl_get_options", 00:07:20.487 "iobuf_get_stats", 00:07:20.487 "iobuf_set_options", 00:07:20.487 "framework_get_pci_devices", 00:07:20.487 "framework_get_config", 00:07:20.487 "framework_get_subsystems", 00:07:20.487 "trace_get_info", 00:07:20.487 "trace_get_tpoint_group_mask", 00:07:20.487 "trace_disable_tpoint_group", 00:07:20.487 "trace_enable_tpoint_group", 00:07:20.487 
"trace_clear_tpoint_mask", 00:07:20.487 "trace_set_tpoint_mask", 00:07:20.487 "keyring_get_keys", 00:07:20.487 "spdk_get_version", 00:07:20.487 "rpc_get_methods" 00:07:20.487 ] 00:07:20.487 13:35:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.487 13:35:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:20.487 13:35:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1905768 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1905768 ']' 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1905768 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1905768 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1905768' 00:07:20.487 killing process with pid 1905768 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1905768 00:07:20.487 13:35:13 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1905768 00:07:20.748 00:07:20.748 real 0m1.386s 00:07:20.748 user 0m2.568s 00:07:20.748 sys 0m0.388s 00:07:20.748 13:35:13 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:20.748 13:35:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:20.748 ************************************ 00:07:20.748 END TEST spdkcli_tcp 00:07:20.748 ************************************ 00:07:20.748 13:35:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:20.748 13:35:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:20.748 13:35:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:20.748 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:07:20.748 ************************************ 00:07:20.748 START TEST dpdk_mem_utility 00:07:20.748 ************************************ 00:07:20.748 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.010 * Looking for test storage... 
00:07:21.010 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:07:21.010 13:35:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:21.010 13:35:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1906180 00:07:21.010 13:35:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1906180 00:07:21.010 13:35:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:07:21.010 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1906180 ']' 00:07:21.010 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.010 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:21.010 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.010 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:21.010 13:35:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.010 [2024-06-11 13:35:13.748701] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:21.010 [2024-06-11 13:35:13.748753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906180 ] 00:07:21.010 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.010 [2024-06-11 13:35:13.812161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.010 [2024-06-11 13:35:13.876759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.272 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:21.272 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:07:21.272 13:35:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:21.272 13:35:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:21.272 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:21.272 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.272 { 00:07:21.272 "filename": "/tmp/spdk_mem_dump.txt" 00:07:21.272 } 00:07:21.272 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:21.272 13:35:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:21.272 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:21.272 1 heaps totaling size 814.000000 MiB 00:07:21.272 size: 814.000000 MiB heap id: 0 00:07:21.272 end heaps---------- 00:07:21.272 8 mempools totaling size 598.116089 MiB 00:07:21.272 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:21.272 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:21.272 size: 84.521057 MiB name: bdev_io_1906180 00:07:21.272 size: 51.011292 MiB name: evtpool_1906180 00:07:21.272 size: 50.003479 MiB name: msgpool_1906180 
00:07:21.272 size: 21.763794 MiB name: PDU_Pool 00:07:21.272 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:21.272 size: 0.026123 MiB name: Session_Pool 00:07:21.272 end mempools------- 00:07:21.272 6 memzones totaling size 4.142822 MiB 00:07:21.272 size: 1.000366 MiB name: RG_ring_0_1906180 00:07:21.272 size: 1.000366 MiB name: RG_ring_1_1906180 00:07:21.272 size: 1.000366 MiB name: RG_ring_4_1906180 00:07:21.272 size: 1.000366 MiB name: RG_ring_5_1906180 00:07:21.272 size: 0.125366 MiB name: RG_ring_2_1906180 00:07:21.272 size: 0.015991 MiB name: RG_ring_3_1906180 00:07:21.272 end memzones------- 00:07:21.272 13:35:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:21.272 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:21.272 list of free elements. size: 12.519348 MiB 00:07:21.272 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:21.272 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:21.272 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:21.272 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:21.272 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:21.272 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:21.272 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:21.272 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:21.272 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:21.272 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:21.272 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:21.272 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:21.272 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:21.272 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:21.272 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:21.272 list of standard malloc elements. 
size: 199.218079 MiB 00:07:21.272 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:21.272 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:21.272 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:21.272 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:21.272 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:21.272 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:21.272 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:21.273 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:21.273 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:21.273 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:21.273 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:21.273 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:21.273 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:21.273 list of memzone associated elements. 
size: 602.262573 MiB 00:07:21.273 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:21.273 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:21.273 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:21.273 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:21.273 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:21.273 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1906180_0 00:07:21.273 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:21.273 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1906180_0 00:07:21.273 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:21.273 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1906180_0 00:07:21.273 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:21.273 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:21.273 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:21.273 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:21.273 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:21.273 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1906180 00:07:21.273 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:21.273 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1906180 00:07:21.273 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:21.273 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1906180 00:07:21.273 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:21.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:21.273 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:21.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:21.273 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:21.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:21.273 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:21.273 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:21.273 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:21.273 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1906180 00:07:21.273 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:21.273 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1906180 00:07:21.273 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:21.273 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1906180 00:07:21.273 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:21.273 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1906180 00:07:21.273 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:21.273 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1906180 00:07:21.273 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:21.273 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:21.273 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:21.273 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:21.273 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:21.273 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:21.273 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:21.273 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1906180 00:07:21.273 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:21.273 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:21.273 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:21.273 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:21.273 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:21.273 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1906180 00:07:21.273 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:21.273 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:21.273 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:21.273 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1906180 00:07:21.273 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:21.273 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1906180 00:07:21.273 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:21.273 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:21.273 13:35:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:21.273 13:35:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1906180 00:07:21.273 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1906180 ']' 00:07:21.273 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1906180 00:07:21.274 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:07:21.274 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:21.274 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1906180 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1906180' 00:07:21.535 killing process with pid 1906180 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1906180 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1906180 00:07:21.535 00:07:21.535 real 0m0.835s 00:07:21.535 user 0m0.864s 00:07:21.535 sys 0m0.330s 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:21.535 13:35:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.535 ************************************ 00:07:21.535 END TEST dpdk_mem_utility 00:07:21.535 ************************************ 00:07:21.796 13:35:14 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:21.796 13:35:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:21.796 13:35:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:21.796 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:07:21.796 ************************************ 00:07:21.796 START TEST event 00:07:21.796 ************************************ 00:07:21.796 13:35:14 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:07:21.796 * Looking for test storage... 
00:07:21.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:21.796 13:35:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:21.796 13:35:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:21.796 13:35:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:21.796 13:35:14 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:21.796 13:35:14 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:21.796 13:35:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.796 ************************************ 00:07:21.796 START TEST event_perf 00:07:21.796 ************************************ 00:07:21.796 13:35:14 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:21.796 Running I/O for 1 seconds...[2024-06-11 13:35:14.657192] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:21.796 [2024-06-11 13:35:14.657296] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906500 ] 00:07:21.796 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.057 [2024-06-11 13:35:14.725937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.057 [2024-06-11 13:35:14.805173] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.057 [2024-06-11 13:35:14.805290] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.057 [2024-06-11 13:35:14.805448] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.057 Running I/O for 1 seconds...[2024-06-11 13:35:14.805449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.998 00:07:22.998 lcore 0: 177791 00:07:22.998 lcore 1: 177790 00:07:22.998 lcore 2: 177792 00:07:22.998 lcore 3: 177793 00:07:22.998 done. 00:07:22.998 00:07:22.998 real 0m1.223s 00:07:22.998 user 0m4.142s 00:07:22.998 sys 0m0.081s 00:07:22.998 13:35:15 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.998 13:35:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.998 ************************************ 00:07:22.998 END TEST event_perf 00:07:22.998 ************************************ 00:07:22.998 13:35:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:22.998 13:35:15 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:22.998 13:35:15 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.998 13:35:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.259 ************************************ 00:07:23.259 START TEST event_reactor 00:07:23.259 ************************************ 00:07:23.259 13:35:15 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:23.259 [2024-06-11 13:35:15.952247] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
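The event_perf and reactor runs traced above can be reproduced standalone with the same binaries; -m 0xF selects four cores, and -t 1 appears to be the run time in seconds, judging by the "Running I/O for 1 seconds" banner:

# Event framework micro-benchmark on cores 0-3; prints per-lcore event counts.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1

# Single-reactor tick test; prints the oneshot/tick trace seen below.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1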
00:07:23.259 [2024-06-11 13:35:15.952337] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906671 ] 00:07:23.259 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.259 [2024-06-11 13:35:16.018168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.259 [2024-06-11 13:35:16.084621] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.643 test_start 00:07:24.643 oneshot 00:07:24.643 tick 100 00:07:24.643 tick 100 00:07:24.643 tick 250 00:07:24.643 tick 100 00:07:24.643 tick 100 00:07:24.643 tick 250 00:07:24.643 tick 100 00:07:24.643 tick 500 00:07:24.643 tick 100 00:07:24.643 tick 100 00:07:24.643 tick 250 00:07:24.643 tick 100 00:07:24.643 tick 100 00:07:24.643 test_end 00:07:24.643 00:07:24.643 real 0m1.206s 00:07:24.643 user 0m1.130s 00:07:24.643 sys 0m0.072s 00:07:24.643 13:35:17 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:24.643 13:35:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:24.643 ************************************ 00:07:24.643 END TEST event_reactor 00:07:24.643 ************************************ 00:07:24.643 13:35:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:24.643 13:35:17 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:24.643 13:35:17 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:24.643 13:35:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:24.643 ************************************ 00:07:24.643 START TEST event_reactor_perf 00:07:24.643 ************************************ 00:07:24.644 13:35:17 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:24.644 [2024-06-11 13:35:17.231854] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:24.644 [2024-06-11 13:35:17.231951] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906963 ] 00:07:24.644 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.644 [2024-06-11 13:35:17.295455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.644 [2024-06-11 13:35:17.360440] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.587 test_start 00:07:25.587 test_end 00:07:25.587 Performance: 370277 events per second 00:07:25.587 00:07:25.587 real 0m1.201s 00:07:25.587 user 0m1.127s 00:07:25.587 sys 0m0.070s 00:07:25.587 13:35:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:25.587 13:35:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.587 ************************************ 00:07:25.587 END TEST event_reactor_perf 00:07:25.587 ************************************ 00:07:25.587 13:35:18 event -- event/event.sh@49 -- # uname -s 00:07:25.587 13:35:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:25.587 13:35:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:25.587 13:35:18 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:25.587 13:35:18 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.587 13:35:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.587 ************************************ 00:07:25.587 START TEST event_scheduler 00:07:25.587 ************************************ 00:07:25.587 13:35:18 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:25.847 * Looking for test storage... 00:07:25.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:07:25.847 13:35:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:25.847 13:35:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1907347 00:07:25.847 13:35:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:25.847 13:35:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:25.847 13:35:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1907347 00:07:25.847 13:35:18 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1907347 ']' 00:07:25.847 13:35:18 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.847 13:35:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:25.847 13:35:18 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
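The scheduler test launched above starts its app with --wait-for-rpc, switches to the dynamic scheduler, and only then completes framework init, as the RPCs in the trace below show. A minimal sketch of that sequence over the default /var/tmp/spdk.sock (the plugin-based scheduler_thread_create calls in the trace need the test's scheduler_plugin and are omitted here; the sleep is a crude stand-in for waitforlisten):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Launch the scheduler test app paused at the RPC stage.
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
sleep 1

# Select the dynamic scheduler before subsystem init, then let init proceed.
$SPDK/scripts/rpc.py framework_set_scheduler dynamic
$SPDK/scripts/rpc.py framework_start_init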
00:07:25.847 13:35:18 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:25.847 13:35:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:25.847 [2024-06-11 13:35:18.637665] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:25.847 [2024-06-11 13:35:18.637725] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907347 ] 00:07:25.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.847 [2024-06-11 13:35:18.693400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.108 [2024-06-11 13:35:18.759517] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.108 [2024-06-11 13:35:18.759678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.108 [2024-06-11 13:35:18.759833] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.108 [2024-06-11 13:35:18.759835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:07:26.680 13:35:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.680 POWER: Env isn't set yet! 00:07:26.680 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:26.680 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:26.680 POWER: Cannot set governor of lcore 0 to userspace 00:07:26.680 POWER: Attempting to initialise PSTAT power management... 
00:07:26.680 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:26.680 POWER: Initialized successfully for lcore 0 power management 00:07:26.680 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:26.680 POWER: Initialized successfully for lcore 1 power management 00:07:26.680 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:26.680 POWER: Initialized successfully for lcore 2 power management 00:07:26.680 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:26.680 POWER: Initialized successfully for lcore 3 power management 00:07:26.680 [2024-06-11 13:35:19.484459] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:26.680 [2024-06-11 13:35:19.484471] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:26.680 [2024-06-11 13:35:19.484476] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.680 13:35:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.680 [2024-06-11 13:35:19.541765] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.680 13:35:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.680 13:35:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:26.680 ************************************ 00:07:26.680 START TEST scheduler_create_thread 00:07:26.680 ************************************ 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.680 2 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.680 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 3 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 4 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 5 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 6 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 7 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 8 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.942 9 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:07:26.942 13:35:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.330 10 00:07:28.330 13:35:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.330 13:35:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:28.330 13:35:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.330 13:35:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:28.900 13:35:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.900 13:35:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:28.900 13:35:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:28.900 13:35:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.900 13:35:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.838 13:35:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:29.838 13:35:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:29.838 13:35:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:29.838 13:35:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.410 13:35:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:30.410 13:35:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:30.410 13:35:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:30.410 13:35:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:30.410 13:35:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.981 13:35:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:30.981 00:07:30.981 real 0m4.213s 00:07:30.981 user 0m0.024s 00:07:30.981 sys 0m0.007s 00:07:30.981 13:35:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:30.981 13:35:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.981 ************************************ 00:07:30.981 END TEST scheduler_create_thread 00:07:30.981 ************************************ 00:07:30.981 13:35:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:30.981 13:35:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1907347 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1907347 ']' 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1907347 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
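The scheduler_create_thread block traced above drives a live SPDK target purely over its JSON-RPC socket. A condensed sketch of that sequence follows, using only commands visible in the xtrace; the assumption that rpc_cmd resolves to scripts/rpc.py and that the scheduler_plugin test module is importable by rpc.py is ours, and the thread names, core masks and activity values are the ones from the trace (only a subset of the traced threads is shown).

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc framework_start_init                                                               # complete framework initialization before creating test threads
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100%-busy thread pinned to core 0
$rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)       # unpinned, idle; the RPC returns its thread_id (11 in the trace)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50                    # bump that thread to 50% busy
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)         # one more thread (12 in the trace) ...
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"                           # ... created only to be deleted again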
00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1907347 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1907347' 00:07:30.981 killing process with pid 1907347 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1907347 00:07:30.981 13:35:23 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1907347 00:07:31.240 [2024-06-11 13:35:24.069984] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:31.501 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:07:31.501 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:31.501 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:07:31.501 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:31.501 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:07:31.501 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:31.501 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:07:31.501 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:31.501 00:07:31.501 real 0m5.761s 00:07:31.501 user 0m13.418s 00:07:31.501 sys 0m0.354s 00:07:31.501 13:35:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.501 13:35:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:31.501 ************************************ 00:07:31.501 END TEST event_scheduler 00:07:31.501 ************************************ 00:07:31.501 13:35:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:31.501 13:35:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:31.501 13:35:24 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:31.501 13:35:24 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:31.501 13:35:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:31.501 ************************************ 00:07:31.501 START TEST app_repeat 00:07:31.501 ************************************ 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1908431 00:07:31.501 13:35:24 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1908431' 00:07:31.501 Process app_repeat pid: 1908431 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:31.501 spdk_app_start Round 0 00:07:31.501 13:35:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1908431 /var/tmp/spdk-nbd.sock 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1908431 ']' 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:31.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:31.501 13:35:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:31.501 [2024-06-11 13:35:24.371705] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:31.502 [2024-06-11 13:35:24.371816] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908431 ] 00:07:31.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.762 [2024-06-11 13:35:24.443410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.762 [2024-06-11 13:35:24.516570] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.762 [2024-06-11 13:35:24.516573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.333 13:35:25 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:32.333 13:35:25 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:32.333 13:35:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.593 Malloc0 00:07:32.593 13:35:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:32.593 Malloc1 00:07:32.853 13:35:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.853 13:35:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.853 13:35:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.853 13:35:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.853 13:35:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.854 13:35:25 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:32.854 /dev/nbd0 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:32.854 1+0 records in 00:07:32.854 1+0 records out 00:07:32.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277723 s, 14.7 MB/s 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:32.854 13:35:25 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:32.854 13:35:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:33.114 /dev/nbd1 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@868 
-- # local i 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:33.114 1+0 records in 00:07:33.114 1+0 records out 00:07:33.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238755 s, 17.2 MB/s 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:33.114 13:35:25 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.114 13:35:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:33.375 { 00:07:33.375 "nbd_device": "/dev/nbd0", 00:07:33.375 "bdev_name": "Malloc0" 00:07:33.375 }, 00:07:33.375 { 00:07:33.375 "nbd_device": "/dev/nbd1", 00:07:33.375 "bdev_name": "Malloc1" 00:07:33.375 } 00:07:33.375 ]' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:33.375 { 00:07:33.375 "nbd_device": "/dev/nbd0", 00:07:33.375 "bdev_name": "Malloc0" 00:07:33.375 }, 00:07:33.375 { 00:07:33.375 "nbd_device": "/dev/nbd1", 00:07:33.375 "bdev_name": "Malloc1" 00:07:33.375 } 00:07:33.375 ]' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:33.375 /dev/nbd1' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:33.375 /dev/nbd1' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:33.375 256+0 records in 00:07:33.375 256+0 records out 00:07:33.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125196 s, 83.8 MB/s 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.375 256+0 records in 00:07:33.375 256+0 records out 00:07:33.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157806 s, 66.4 MB/s 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.375 256+0 records in 00:07:33.375 256+0 records out 00:07:33.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315927 s, 33.2 MB/s 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:33.375 13:35:26 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.375 13:35:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.635 13:35:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.635 13:35:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.636 13:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:33.896 13:35:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:33.896 13:35:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:34.156 13:35:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:34.156 [2024-06-11 13:35:27.012959] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.416 [2024-06-11 13:35:27.076571] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.416 [2024-06-11 13:35:27.076573] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.416 [2024-06-11 13:35:27.107927] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:34.416 [2024-06-11 13:35:27.107962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:37.712 13:35:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:37.712 13:35:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:37.712 spdk_app_start Round 1 00:07:37.712 13:35:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1908431 /var/tmp/spdk-nbd.sock 00:07:37.712 13:35:29 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1908431 ']' 00:07:37.712 13:35:29 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.712 13:35:29 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:37.712 13:35:29 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.712 13:35:29 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:37.712 13:35:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:37.712 13:35:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.712 Malloc0 00:07:37.712 13:35:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.712 Malloc1 00:07:37.712 13:35:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # 
local i 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:37.712 /dev/nbd0 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:37.712 1+0 records in 00:07:37.712 1+0 records out 00:07:37.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278376 s, 14.7 MB/s 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:37.712 13:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.712 13:35:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:37.972 /dev/nbd1 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@884 -- # dd 
if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:37.972 1+0 records in 00:07:37.972 1+0 records out 00:07:37.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272968 s, 15.0 MB/s 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:37.972 13:35:30 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.972 13:35:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:38.232 { 00:07:38.232 "nbd_device": "/dev/nbd0", 00:07:38.232 "bdev_name": "Malloc0" 00:07:38.232 }, 00:07:38.232 { 00:07:38.232 "nbd_device": "/dev/nbd1", 00:07:38.232 "bdev_name": "Malloc1" 00:07:38.232 } 00:07:38.232 ]' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:38.232 { 00:07:38.232 "nbd_device": "/dev/nbd0", 00:07:38.232 "bdev_name": "Malloc0" 00:07:38.232 }, 00:07:38.232 { 00:07:38.232 "nbd_device": "/dev/nbd1", 00:07:38.232 "bdev_name": "Malloc1" 00:07:38.232 } 00:07:38.232 ]' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:38.232 /dev/nbd1' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:38.232 /dev/nbd1' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:38.232 256+0 records in 
00:07:38.232 256+0 records out 00:07:38.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114704 s, 91.4 MB/s 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:38.232 256+0 records in 00:07:38.232 256+0 records out 00:07:38.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309295 s, 33.9 MB/s 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.232 13:35:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:38.232 256+0 records in 00:07:38.232 256+0 records out 00:07:38.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169598 s, 61.8 MB/s 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.232 13:35:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:38.233 13:35:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.233 13:35:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.494 13:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.755 13:35:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.755 13:35:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:39.015 13:35:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:39.015 [2024-06-11 13:35:31.851563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.015 [2024-06-11 13:35:31.915777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.015 [2024-06-11 13:35:31.915779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.276 [2024-06-11 13:35:31.948031] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:39.276 [2024-06-11 13:35:31.948067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
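Each app_repeat round traced above (Round 0 as well as the Round 1 that ends here) exercises the same export-and-verify loop against the /var/tmp/spdk-nbd.sock instance. A condensed sketch of one round follows, reusing only the RPCs and dd/cmp invocations visible in the trace; the long workspace paths are abbreviated, and the for-loop is our shorthand for the unrolled per-device steps in the log.

rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096                           # Malloc0: 64 MB malloc bdev with 4096-byte blocks
$rpc bdev_malloc_create 64 4096                           # Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0                     # expose each bdev as a kernel nbd block device
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256       # 1 MiB of random reference data
for d in /dev/nbd0 /dev/nbd1; do
  dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct  # write the reference data through the nbd device
  cmp -b -n 1M nbdrandtest $d                             # verify it reads back identically
done
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc nbd_get_disks | jq -r '.[] | .nbd_device'            # expect an empty list once both disks are stopped
$rpc spdk_kill_instance SIGTERM                           # shut the app down; app_repeat then starts the next round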
00:07:41.818 13:35:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:41.818 13:35:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:41.818 spdk_app_start Round 2 00:07:41.818 13:35:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1908431 /var/tmp/spdk-nbd.sock 00:07:41.818 13:35:34 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1908431 ']' 00:07:41.818 13:35:34 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:41.818 13:35:34 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:41.818 13:35:34 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:41.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:41.818 13:35:34 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:41.818 13:35:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.078 13:35:34 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:42.078 13:35:34 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:42.078 13:35:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.338 Malloc0 00:07:42.338 13:35:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:42.338 Malloc1 00:07:42.338 13:35:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.338 13:35:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:42.598 /dev/nbd0 00:07:42.598 13:35:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:42.598 13:35:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.598 1+0 records in 00:07:42.598 1+0 records out 00:07:42.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271546 s, 15.1 MB/s 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:42.598 13:35:35 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:42.598 13:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.598 13:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.598 13:35:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:42.857 /dev/nbd1 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:42.857 1+0 records in 00:07:42.857 1+0 records out 00:07:42.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386468 s, 10.6 MB/s 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:07:42.857 13:35:35 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:42.857 { 00:07:42.857 "nbd_device": "/dev/nbd0", 00:07:42.857 "bdev_name": "Malloc0" 00:07:42.857 }, 00:07:42.857 { 00:07:42.857 "nbd_device": "/dev/nbd1", 00:07:42.857 "bdev_name": "Malloc1" 00:07:42.857 } 00:07:42.857 ]' 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:42.857 { 00:07:42.857 "nbd_device": "/dev/nbd0", 00:07:42.857 "bdev_name": "Malloc0" 00:07:42.857 }, 00:07:42.857 { 00:07:42.857 "nbd_device": "/dev/nbd1", 00:07:42.857 "bdev_name": "Malloc1" 00:07:42.857 } 00:07:42.857 ]' 00:07:42.857 13:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:43.117 /dev/nbd1' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:43.117 /dev/nbd1' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:43.117 256+0 records in 00:07:43.117 256+0 records out 00:07:43.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119564 s, 87.7 MB/s 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:43.117 256+0 records in 00:07:43.117 256+0 records out 00:07:43.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158518 s, 66.1 MB/s 00:07:43.117 13:35:35 event.app_repeat 
-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:43.117 256+0 records in 00:07:43.117 256+0 records out 00:07:43.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202459 s, 51.8 MB/s 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.117 13:35:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:43.117 13:35:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.377 13:35:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:43.378 13:35:36 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.378 13:35:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:43.638 13:35:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:43.638 13:35:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:43.898 13:35:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:43.898 [2024-06-11 13:35:36.719125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:43.898 [2024-06-11 13:35:36.782707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.898 [2024-06-11 13:35:36.782708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.157 [2024-06-11 13:35:36.814079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:44.157 [2024-06-11 13:35:36.814114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:46.783 13:35:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1908431 /var/tmp/spdk-nbd.sock 00:07:46.783 13:35:39 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1908431 ']' 00:07:46.783 13:35:39 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:46.783 13:35:39 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:46.783 13:35:39 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:46.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:46.783 13:35:39 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:46.783 13:35:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:07:47.043 13:35:39 event.app_repeat -- event/event.sh@39 -- # killprocess 1908431 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1908431 ']' 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1908431 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1908431 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1908431' 00:07:47.043 killing process with pid 1908431 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1908431 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1908431 00:07:47.043 spdk_app_start is called in Round 0. 00:07:47.043 Shutdown signal received, stop current app iteration 00:07:47.043 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:07:47.043 spdk_app_start is called in Round 1. 00:07:47.043 Shutdown signal received, stop current app iteration 00:07:47.043 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:07:47.043 spdk_app_start is called in Round 2. 00:07:47.043 Shutdown signal received, stop current app iteration 00:07:47.043 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:07:47.043 spdk_app_start is called in Round 3. 
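The kill 1908431 / wait 1908431 pair above comes from the killprocess helper in autotest_common.sh, which every lock test that follows reuses. Condensed from the xtrace (the sudo/child-process branch and the "process already gone" path are omitted here):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                            # @949: a pid is required
      kill -0 "$pid" || return 0                           # @953: nothing to do if it already exited
      if [[ $(uname) = Linux ]]; then                      # @954
          process_name=$(ps --no-headers -o comm= "$pid")  # @955: reactor_0 for an SPDK app
      fi
      if [[ $process_name != sudo ]]; then                 # @959: not wrapped in sudo, kill directly
          echo "killing process with pid $pid"             # @967
          kill "$pid"                                      # @968
      fi
      wait "$pid"                                          # @973: reap it so sockets and locks are released
  }

The Round 0..3 notices around this point are the application's own log of the restart cycles driven earlier by the spdk_kill_instance SIGTERM / sleep 3 loop.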
00:07:47.043 Shutdown signal received, stop current app iteration 00:07:47.043 13:35:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:47.043 13:35:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:47.043 00:07:47.043 real 0m15.580s 00:07:47.043 user 0m33.652s 00:07:47.043 sys 0m2.104s 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.043 13:35:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:47.043 ************************************ 00:07:47.043 END TEST app_repeat 00:07:47.043 ************************************ 00:07:47.043 13:35:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:47.043 13:35:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:47.043 13:35:39 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:47.043 13:35:39 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.043 13:35:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:47.304 ************************************ 00:07:47.304 START TEST cpu_locks 00:07:47.304 ************************************ 00:07:47.304 13:35:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:47.304 * Looking for test storage... 00:07:47.304 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:47.304 13:35:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:47.304 13:35:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:47.304 13:35:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:47.304 13:35:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:47.304 13:35:40 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:47.304 13:35:40 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.304 13:35:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.304 ************************************ 00:07:47.304 START TEST default_locks 00:07:47.304 ************************************ 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1911987 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1911987 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1911987 ']' 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
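END TEST app_repeat closes the first half of the suite and run_test immediately re-enters for cpu_locks and its nested sub-tests. The asterisk banners and the real/user/sys summary both come from the run_test wrapper in autotest_common.sh; the following is only a rough approximation inferred from that output (the real helper also toggles xtrace and keeps per-test timing records, which are left out here):

  run_test() {
      [ $# -le 1 ] && return 1          # @1100: needs a test name plus a command
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                         # bash's time keyword prints the real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }

Every cpu_locks sub-test below runs through this wrapper, which is also, presumably, where the event.cpu_locks.<sub-test> prefix on each xtrace line comes from.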
00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:47.304 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.304 [2024-06-11 13:35:40.174651] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:47.304 [2024-06-11 13:35:40.174715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911987 ] 00:07:47.304 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.567 [2024-06-11 13:35:40.237928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.567 [2024-06-11 13:35:40.309047] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.137 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:48.137 13:35:40 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:07:48.137 13:35:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1911987 00:07:48.137 13:35:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1911987 00:07:48.137 13:35:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.397 lslocks: write error 00:07:48.397 13:35:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1911987 00:07:48.397 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1911987 ']' 00:07:48.397 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1911987 00:07:48.397 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:07:48.397 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:48.397 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1911987' 00:07:48.658 killing process with pid 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1911987 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1911987 ']' 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.658 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1911987) - No such process 00:07:48.658 ERROR: process (pid: 1911987) is no longer running 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:48.658 00:07:48.658 real 0m1.410s 00:07:48.658 user 0m1.478s 00:07:48.658 sys 0m0.466s 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:48.658 13:35:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.658 ************************************ 00:07:48.658 END TEST default_locks 00:07:48.658 ************************************ 00:07:48.658 13:35:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:48.658 13:35:41 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:48.658 13:35:41 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:48.658 13:35:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.919 ************************************ 00:07:48.919 START TEST default_locks_via_rpc 00:07:48.919 ************************************ 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1912302 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1912302 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:48.919 13:35:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1912302 ']' 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:48.919 13:35:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.919 [2024-06-11 13:35:41.657687] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:48.919 [2024-06-11 13:35:41.657743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912302 ] 00:07:48.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.919 [2024-06-11 13:35:41.721789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.919 [2024-06-11 13:35:41.795550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1912302 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1912302 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1912302 00:07:49.860 13:35:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1912302 ']' 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1912302 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:49.860 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1912302 00:07:50.120 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:50.120 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:50.120 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1912302' 00:07:50.120 killing process with pid 1912302 00:07:50.120 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1912302 00:07:50.120 13:35:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1912302 00:07:50.120 00:07:50.120 real 0m1.408s 00:07:50.121 user 0m1.500s 00:07:50.121 sys 0m0.464s 00:07:50.121 13:35:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:50.121 13:35:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.121 ************************************ 00:07:50.121 END TEST default_locks_via_rpc 00:07:50.121 ************************************ 00:07:50.380 13:35:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:50.380 13:35:43 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:50.380 13:35:43 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:50.380 13:35:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 ************************************ 00:07:50.380 START TEST non_locking_app_on_locked_coremask 00:07:50.380 ************************************ 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1912568 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1912568 /var/tmp/spdk.sock 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1912568 ']' 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
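Three small helpers carry most of the assertions in these lock tests, and all three appear in the trace above: locks_exist proves a given pid holds an flock on an spdk_cpu_lock file, no_locks proves nothing is left locked after framework_disable_cpumask_locks, and NOT inverts the exit status of whatever it wraps so an expected failure counts as a pass. A condensed reading of the trace (the signal handling inside the real NOT and the file discovery inside no_locks are simplified):

  locks_exist() {                   # cpu_locks.sh@22
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  no_locks() {                      # cpu_locks.sh@26-27: expects zero lock files
      local lock_files=(/var/tmp/spdk_cpu_lock*)    # assumes nullglob; real helper may differ
      (( ${#lock_files[@]} == 0 ))
  }

  NOT() {                           # autotest_common.sh@649-676, simplified
      local es=0
      "$@" || es=$?
      ((!es == 0))                  # succeed only when the wrapped command failed
  }

That is why lines like "lslocks: write error" and "No such process" show up in this log without failing the run: they are produced inside checks whose failure is the expected outcome.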
00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:50.380 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.380 [2024-06-11 13:35:43.137000] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:50.380 [2024-06-11 13:35:43.137059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912568 ] 00:07:50.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.380 [2024-06-11 13:35:43.199545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.380 [2024-06-11 13:35:43.267126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1912724 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1912724 /var/tmp/spdk2.sock 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1912724 ']' 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:51.319 13:35:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.319 [2024-06-11 13:35:43.941803] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:51.319 [2024-06-11 13:35:43.941855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912724 ] 00:07:51.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.319 [2024-06-11 13:35:44.030632] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
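non_locking_app_on_locked_coremask is the first of the pairwise scenarios: the first spdk_tgt claims core 0 the normal way, and a second instance is pointed at the same core but told not to take CPU locks, so both must come up. The launch sequence, condensed from cpu_locks.sh@79-@85 above (spdk_tgt stands for the full build/bin/spdk_tgt path, and the pid bookkeeping is abbreviated):

  spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                       # first instance owns the core-0 lock
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock  # comes up too: "CPU core locks deactivated"

The "CPU core locks deactivated" notice above is the second instance acknowledging --disable-cpumask-locks; the lock itself still belongs to the first instance, which is what locks_exist verifies next.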
00:07:51.319 [2024-06-11 13:35:44.030658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.319 [2024-06-11 13:35:44.164754] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.891 13:35:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:51.891 13:35:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:51.891 13:35:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1912568 00:07:51.891 13:35:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1912568 00:07:51.891 13:35:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.460 lslocks: write error 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1912568 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1912568 ']' 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1912568 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1912568 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1912568' 00:07:52.460 killing process with pid 1912568 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1912568 00:07:52.460 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1912568 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1912724 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1912724 ']' 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1912724 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1912724 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1912724' 00:07:53.029 
killing process with pid 1912724 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1912724 00:07:53.029 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1912724 00:07:53.289 00:07:53.289 real 0m2.896s 00:07:53.289 user 0m3.158s 00:07:53.289 sys 0m0.882s 00:07:53.289 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.289 13:35:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.289 ************************************ 00:07:53.289 END TEST non_locking_app_on_locked_coremask 00:07:53.289 ************************************ 00:07:53.289 13:35:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:53.289 13:35:46 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:53.289 13:35:46 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.289 13:35:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.289 ************************************ 00:07:53.289 START TEST locking_app_on_unlocked_coremask 00:07:53.289 ************************************ 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1913107 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1913107 /var/tmp/spdk.sock 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1913107 ']' 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:53.289 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.289 [2024-06-11 13:35:46.105707] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:53.289 [2024-06-11 13:35:46.105757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913107 ] 00:07:53.289 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.289 [2024-06-11 13:35:46.167938] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:53.289 [2024-06-11 13:35:46.167970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.549 [2024-06-11 13:35:46.234979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1913434 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1913434 /var/tmp/spdk2.sock 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1913434 ']' 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:54.121 13:35:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.121 [2024-06-11 13:35:46.929359] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:54.121 [2024-06-11 13:35:46.929413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913434 ] 00:07:54.121 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.121 [2024-06-11 13:35:47.023043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.382 [2024-06-11 13:35:47.152779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.952 13:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:54.952 13:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:54.952 13:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1913434 00:07:54.952 13:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.952 13:35:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1913434 00:07:55.524 lslocks: write error 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1913107 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1913107 ']' 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1913107 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1913107 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1913107' 00:07:55.524 killing process with pid 1913107 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1913107 00:07:55.524 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1913107 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1913434 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1913434 ']' 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1913434 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1913434 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
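Here the roles from the previous scenario are reversed: pid 1913107 was started with --disable-cpumask-locks, so the core-0 lock belongs to the second, normally started instance, and that is the pid the check targets (cpu_locks.sh@105 above). The one-line assertion, with the lock-file name inferred from the naming seen later in this log:

  locks_exist 1913434     # lslocks -p 1913434 | grep -q spdk_cpu_lock, presumably spdk_cpu_lock_000 for core 0

After that, both targets are shut down with killprocess.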
00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1913434' 00:07:56.095 killing process with pid 1913434 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1913434 00:07:56.095 13:35:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1913434 00:07:56.355 00:07:56.355 real 0m2.966s 00:07:56.355 user 0m3.218s 00:07:56.355 sys 0m0.895s 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.355 ************************************ 00:07:56.355 END TEST locking_app_on_unlocked_coremask 00:07:56.355 ************************************ 00:07:56.355 13:35:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:56.355 13:35:49 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:56.355 13:35:49 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:56.355 13:35:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.355 ************************************ 00:07:56.355 START TEST locking_app_on_locked_coremask 00:07:56.355 ************************************ 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1913813 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1913813 /var/tmp/spdk.sock 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1913813 ']' 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:56.355 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:56.355 [2024-06-11 13:35:49.154826] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:56.356 [2024-06-11 13:35:49.154888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913813 ] 00:07:56.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.356 [2024-06-11 13:35:49.215417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.616 [2024-06-11 13:35:49.278287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1914049 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1914049 /var/tmp/spdk2.sock 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1914049 /var/tmp/spdk2.sock 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1914049 /var/tmp/spdk2.sock 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1914049 ']' 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:57.186 13:35:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:57.186 [2024-06-11 13:35:49.969595] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:57.186 [2024-06-11 13:35:49.969646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914049 ] 00:07:57.186 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.186 [2024-06-11 13:35:50.063488] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1913813 has claimed it. 00:07:57.186 [2024-06-11 13:35:50.063535] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:57.757 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1914049) - No such process 00:07:57.757 ERROR: process (pid: 1914049) is no longer running 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1913813 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1913813 00:07:57.757 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:58.327 lslocks: write error 00:07:58.327 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1913813 00:07:58.327 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1913813 ']' 00:07:58.327 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1913813 00:07:58.327 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:07:58.327 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:58.327 13:35:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1913813 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1913813' 00:07:58.327 killing process with pid 1913813 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1913813 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1913813 00:07:58.327 00:07:58.327 real 0m2.140s 00:07:58.327 user 0m2.392s 00:07:58.327 sys 0m0.580s 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:07:58.327 13:35:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.327 ************************************ 00:07:58.327 END TEST locking_app_on_locked_coremask 00:07:58.327 ************************************ 00:07:58.587 13:35:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:58.587 13:35:51 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:58.587 13:35:51 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:58.587 13:35:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.587 ************************************ 00:07:58.587 START TEST locking_overlapped_coremask 00:07:58.587 ************************************ 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1914260 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1914260 /var/tmp/spdk.sock 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1914260 ']' 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:58.587 13:35:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.587 [2024-06-11 13:35:51.370504] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
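locking_app_on_locked_coremask is the negative counterpart: both instances ask for core 0 with locks enabled, so the second one must refuse to start, and the NOT wrapper turns that refusal into a pass. Condensed from cpu_locks.sh@114-@124 in the trace above (spdk_tgt is shorthand for the full binary path, pid plumbing abbreviated):

  spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                  # pid 1913813 takes the core-0 lock
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # pid 1914049 wants the same core, locks enabled
  NOT waitforlisten $! /var/tmp/spdk2.sock       # must fail: "Cannot create lock on core 0"
  locks_exist "$spdk_tgt_pid"                    # the survivor still holds the lock
  killprocess "$spdk_tgt_pid"

The "kill: (1914049) - No such process" line is waitforlisten's cleanup path noticing the second instance has already exited, not an error in the test itself.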
00:07:58.587 [2024-06-11 13:35:51.370564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914260 ] 00:07:58.587 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.587 [2024-06-11 13:35:51.435052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.848 [2024-06-11 13:35:51.507478] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.848 [2024-06-11 13:35:51.507597] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.848 [2024-06-11 13:35:51.507600] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1914527 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1914527 /var/tmp/spdk2.sock 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1914527 /var/tmp/spdk2.sock 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1914527 /var/tmp/spdk2.sock 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1914527 ']' 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:59.418 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.418 [2024-06-11 13:35:52.189007] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:59.418 [2024-06-11 13:35:52.189073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914527 ] 00:07:59.418 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.418 [2024-06-11 13:35:52.265029] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1914260 has claimed it. 00:07:59.418 [2024-06-11 13:35:52.265064] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:59.988 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1914527) - No such process 00:07:59.988 ERROR: process (pid: 1914527) is no longer running 00:07:59.988 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:59.988 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:07:59.988 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:59.988 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:59.988 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1914260 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1914260 ']' 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1914260 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1914260 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1914260' 00:07:59.989 killing process with pid 1914260 00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 1914260 
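locking_overlapped_coremask repeats the same collision with multi-core masks: the first target takes 0x7 (cores 0-2) and the second asks for 0x1c (cores 2-4), so they collide on core 2 only, which is still enough to make the second target exit. After the expected failure the test checks that exactly the three lock files for cores 0-2 are left behind. The check, copied almost verbatim from cpu_locks.sh@36-@38 in the trace:

  check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1 and 2
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }

If the second target had managed to grab core 2, or if a stale lock from an earlier test were lying around, the string comparison would fail.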
00:07:59.989 13:35:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1914260 00:08:00.249 00:08:00.249 real 0m1.757s 00:08:00.249 user 0m4.926s 00:08:00.249 sys 0m0.382s 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.249 ************************************ 00:08:00.249 END TEST locking_overlapped_coremask 00:08:00.249 ************************************ 00:08:00.249 13:35:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:00.249 13:35:53 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:00.249 13:35:53 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:00.249 13:35:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.249 ************************************ 00:08:00.249 START TEST locking_overlapped_coremask_via_rpc 00:08:00.249 ************************************ 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1914715 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1914715 /var/tmp/spdk.sock 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1914715 ']' 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.249 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:00.509 [2024-06-11 13:35:53.188665] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:00.509 [2024-06-11 13:35:53.188712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914715 ] 00:08:00.509 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.510 [2024-06-11 13:35:53.249404] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:00.510 [2024-06-11 13:35:53.249432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.510 [2024-06-11 13:35:53.317047] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.510 [2024-06-11 13:35:53.317247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.510 [2024-06-11 13:35:53.317250] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1914895 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1914895 /var/tmp/spdk2.sock 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1914895 ']' 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:01.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:01.081 13:35:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.081 [2024-06-11 13:35:53.992810] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:01.081 [2024-06-11 13:35:53.992862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914895 ] 00:08:01.342 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.342 [2024-06-11 13:35:54.062969] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:01.342 [2024-06-11 13:35:54.062991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.342 [2024-06-11 13:35:54.175441] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.342 [2024-06-11 13:35:54.175596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.342 [2024-06-11 13:35:54.175599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.914 [2024-06-11 13:35:54.759078] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1914715 has claimed it. 
00:08:01.914 request: 00:08:01.914 { 00:08:01.914 "method": "framework_enable_cpumask_locks", 00:08:01.914 "req_id": 1 00:08:01.914 } 00:08:01.914 Got JSON-RPC error response 00:08:01.914 response: 00:08:01.914 { 00:08:01.914 "code": -32603, 00:08:01.914 "message": "Failed to claim CPU core: 2" 00:08:01.914 } 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1914715 /var/tmp/spdk.sock 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1914715 ']' 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:01.914 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.174 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:02.174 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:02.174 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1914895 /var/tmp/spdk2.sock 00:08:02.174 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1914895 ']' 00:08:02.174 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.174 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:02.175 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:02.175 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:02.175 13:35:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.435 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:02.435 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:02.435 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:02.436 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:02.436 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:02.436 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:02.436 00:08:02.436 real 0m1.966s 00:08:02.436 user 0m0.741s 00:08:02.436 sys 0m0.155s 00:08:02.436 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.436 13:35:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 ************************************ 00:08:02.436 END TEST locking_overlapped_coremask_via_rpc 00:08:02.436 ************************************ 00:08:02.436 13:35:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:02.436 13:35:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1914715 ]] 00:08:02.436 13:35:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1914715 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1914715 ']' 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1914715 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1914715 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1914715' 00:08:02.436 killing process with pid 1914715 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1914715 00:08:02.436 13:35:55 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1914715 00:08:02.697 13:35:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1914895 ]] 00:08:02.697 13:35:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1914895 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1914895 ']' 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1914895 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1914895 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1914895' 00:08:02.697 killing process with pid 1914895 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1914895 00:08:02.697 13:35:55 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1914895 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1914715 ]] 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1914715 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1914715 ']' 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1914715 00:08:02.958 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1914715) - No such process 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1914715 is not found' 00:08:02.958 Process with pid 1914715 is not found 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1914895 ]] 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1914895 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1914895 ']' 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1914895 00:08:02.958 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1914895) - No such process 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1914895 is not found' 00:08:02.958 Process with pid 1914895 is not found 00:08:02.958 13:35:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:02.958 00:08:02.958 real 0m15.670s 00:08:02.958 user 0m26.825s 00:08:02.958 sys 0m4.681s 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.958 13:35:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.958 ************************************ 00:08:02.958 END TEST cpu_locks 00:08:02.959 ************************************ 00:08:02.959 00:08:02.959 real 0m41.200s 00:08:02.959 user 1m20.524s 00:08:02.959 sys 0m7.722s 00:08:02.959 13:35:55 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.959 13:35:55 event -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 END TEST event 00:08:02.959 ************************************ 00:08:02.959 13:35:55 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:02.959 13:35:55 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:02.959 13:35:55 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:02.959 13:35:55 -- common/autotest_common.sh@10 -- # set +x 00:08:02.959 ************************************ 00:08:02.959 START TEST thread 00:08:02.959 ************************************ 00:08:02.959 13:35:55 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:08:02.959 * Looking for test storage... 00:08:02.959 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:08:02.959 13:35:55 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:02.959 13:35:55 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:02.959 13:35:55 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:02.959 13:35:55 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.220 ************************************ 00:08:03.220 START TEST thread_poller_perf 00:08:03.220 ************************************ 00:08:03.220 13:35:55 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:03.220 [2024-06-11 13:35:55.920658] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:03.220 [2024-06-11 13:35:55.920755] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915332 ] 00:08:03.220 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.220 [2024-06-11 13:35:55.986053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.220 [2024-06-11 13:35:56.052363] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.220 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:04.606 ====================================== 00:08:04.606 busy:2412432932 (cyc) 00:08:04.606 total_run_count: 288000 00:08:04.606 tsc_hz: 2400000000 (cyc) 00:08:04.606 ====================================== 00:08:04.606 poller_cost: 8376 (cyc), 3490 (nsec) 00:08:04.606 00:08:04.606 real 0m1.216s 00:08:04.606 user 0m1.139s 00:08:04.606 sys 0m0.072s 00:08:04.606 13:35:57 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:04.606 13:35:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 ************************************ 00:08:04.606 END TEST thread_poller_perf 00:08:04.606 ************************************ 00:08:04.606 13:35:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.606 13:35:57 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:04.606 13:35:57 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:04.606 13:35:57 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.606 ************************************ 00:08:04.606 START TEST thread_poller_perf 00:08:04.606 ************************************ 00:08:04.606 13:35:57 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:04.606 [2024-06-11 13:35:57.203771] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:04.606 [2024-06-11 13:35:57.203859] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915687 ] 00:08:04.606 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.606 [2024-06-11 13:35:57.277975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.606 [2024-06-11 13:35:57.346843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.606 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:05.547 ====================================== 00:08:05.547 busy:2401926104 (cyc) 00:08:05.547 total_run_count: 3812000 00:08:05.547 tsc_hz: 2400000000 (cyc) 00:08:05.547 ====================================== 00:08:05.547 poller_cost: 630 (cyc), 262 (nsec) 00:08:05.547 00:08:05.547 real 0m1.218s 00:08:05.547 user 0m1.137s 00:08:05.547 sys 0m0.076s 00:08:05.547 13:35:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.547 13:35:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.547 ************************************ 00:08:05.547 END TEST thread_poller_perf 00:08:05.547 ************************************ 00:08:05.547 13:35:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:05.547 00:08:05.547 real 0m2.663s 00:08:05.547 user 0m2.365s 00:08:05.547 sys 0m0.301s 00:08:05.547 13:35:58 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.547 13:35:58 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.547 ************************************ 00:08:05.547 END TEST thread 00:08:05.547 ************************************ 00:08:05.807 13:35:58 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:08:05.807 13:35:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:05.807 13:35:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.807 13:35:58 -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 ************************************ 00:08:05.807 START TEST accel 00:08:05.807 ************************************ 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:08:05.807 * Looking for test storage... 00:08:05.807 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:05.807 13:35:58 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:05.807 13:35:58 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:05.807 13:35:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:05.807 13:35:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1916079 00:08:05.807 13:35:58 accel -- accel/accel.sh@63 -- # waitforlisten 1916079 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@830 -- # '[' -z 1916079 ']' 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:05.807 13:35:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:05.807 13:35:58 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:05.807 13:35:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.807 13:35:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.807 13:35:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.807 13:35:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.807 13:35:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.807 13:35:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.807 13:35:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:05.807 13:35:58 accel -- accel/accel.sh@41 -- # jq -r . 00:08:05.807 [2024-06-11 13:35:58.669770] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:05.807 [2024-06-11 13:35:58.669837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916079 ] 00:08:05.807 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.067 [2024-06-11 13:35:58.736318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.067 [2024-06-11 13:35:58.809706] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@863 -- # return 0 00:08:06.638 13:35:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:06.638 13:35:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:06.638 13:35:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:06.638 13:35:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:06.638 13:35:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:06.638 13:35:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:06.638 13:35:59 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 
13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # IFS== 00:08:06.638 13:35:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:06.638 13:35:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:06.638 13:35:59 accel -- accel/accel.sh@75 -- # killprocess 1916079 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@949 -- # '[' -z 1916079 ']' 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@953 -- # kill -0 1916079 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@954 -- # uname 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:06.638 13:35:59 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1916079 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1916079' 00:08:06.899 killing process with pid 1916079 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@968 -- # kill 1916079 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@973 -- # wait 1916079 00:08:06.899 13:35:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:06.899 13:35:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:06.899 13:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.899 13:35:59 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:06.899 13:35:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:07.163 13:35:59 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.163 13:35:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:07.163 13:35:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:07.163 13:35:59 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:07.163 13:35:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.163 13:35:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.163 ************************************ 00:08:07.163 START TEST accel_missing_filename 00:08:07.163 ************************************ 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.163 13:35:59 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:07.163 13:35:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:07.163 [2024-06-11 13:35:59.928499] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:07.163 [2024-06-11 13:35:59.928548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916278 ] 00:08:07.163 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.163 [2024-06-11 13:35:59.987346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.163 [2024-06-11 13:36:00.053532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.471 [2024-06-11 13:36:00.085578] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.471 [2024-06-11 13:36:00.124261] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:07.471 A filename is required. 
00:08:07.471 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:08:07.471 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:07.471 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:08:07.471 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:08:07.472 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:08:07.472 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:07.472 00:08:07.472 real 0m0.266s 00:08:07.472 user 0m0.203s 00:08:07.472 sys 0m0.104s 00:08:07.472 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.472 13:36:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 ************************************ 00:08:07.472 END TEST accel_missing_filename 00:08:07.472 ************************************ 00:08:07.472 13:36:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:07.472 13:36:00 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:08:07.472 13:36:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.472 13:36:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 ************************************ 00:08:07.472 START TEST accel_compress_verify 00:08:07.472 ************************************ 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.472 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.472 13:36:00 
accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:07.472 13:36:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:07.472 [2024-06-11 13:36:00.282787] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:07.472 [2024-06-11 13:36:00.282856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916492 ] 00:08:07.472 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.472 [2024-06-11 13:36:00.344965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.754 [2024-06-11 13:36:00.411468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.754 [2024-06-11 13:36:00.443311] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.754 [2024-06-11 13:36:00.480269] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:08:07.754 00:08:07.754 Compression does not support the verify option, aborting. 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:07.754 00:08:07.754 real 0m0.281s 00:08:07.754 user 0m0.223s 00:08:07.754 sys 0m0.098s 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.754 13:36:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:07.754 ************************************ 00:08:07.754 END TEST accel_compress_verify 00:08:07.754 ************************************ 00:08:07.754 13:36:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:07.754 13:36:00 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:07.754 13:36:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:07.754 13:36:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.754 ************************************ 00:08:07.754 START TEST accel_wrong_workload 00:08:07.754 ************************************ 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:07.754 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:08:07.754 
13:36:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:07.754 13:36:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:07.754 Unsupported workload type: foobar 00:08:07.754 [2024-06-11 13:36:00.640749] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:07.754 accel_perf options: 00:08:07.754 [-h help message] 00:08:07.754 [-q queue depth per core] 00:08:07.754 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:07.754 [-T number of threads per core 00:08:07.754 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:07.754 [-t time in seconds] 00:08:07.754 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:07.754 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:07.754 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:07.754 [-l for compress/decompress workloads, name of uncompressed input file 00:08:07.754 [-S for crc32c workload, use this seed value (default 0) 00:08:07.754 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:07.754 [-f for fill workload, use this BYTE value (default 255) 00:08:07.754 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:07.754 [-y verify result if this switch is on] 00:08:07.755 [-a tasks to allocate per core (default: same value as -q)] 00:08:07.755 Can be used to spread operations across a wider range of memory. 
00:08:07.755 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:08:07.755 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:07.755 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:07.755 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:07.755 00:08:07.755 real 0m0.036s 00:08:07.755 user 0m0.022s 00:08:07.755 sys 0m0.014s 00:08:07.755 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:07.755 13:36:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:07.755 ************************************ 00:08:07.755 END TEST accel_wrong_workload 00:08:07.755 ************************************ 00:08:07.755 Error: writing output failed: Broken pipe 00:08:08.016 13:36:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:08.016 13:36:00 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:08:08.016 13:36:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.016 13:36:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 ************************************ 00:08:08.016 START TEST accel_negative_buffers 00:08:08.016 ************************************ 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:08.016 13:36:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:08.016 -x option must be non-negative. 
00:08:08.016 [2024-06-11 13:36:00.752588] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:08.016 accel_perf options: 00:08:08.016 [-h help message] 00:08:08.016 [-q queue depth per core] 00:08:08.016 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:08.016 [-T number of threads per core 00:08:08.016 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:08.016 [-t time in seconds] 00:08:08.016 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:08.016 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:08.016 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:08.016 [-l for compress/decompress workloads, name of uncompressed input file 00:08:08.016 [-S for crc32c workload, use this seed value (default 0) 00:08:08.016 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:08.016 [-f for fill workload, use this BYTE value (default 255) 00:08:08.016 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:08.016 [-y verify result if this switch is on] 00:08:08.016 [-a tasks to allocate per core (default: same value as -q)] 00:08:08.016 Can be used to spread operations across a wider range of memory. 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:08.016 00:08:08.016 real 0m0.038s 00:08:08.016 user 0m0.025s 00:08:08.016 sys 0m0.013s 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:08.016 13:36:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 ************************************ 00:08:08.016 END TEST accel_negative_buffers 00:08:08.016 ************************************ 00:08:08.016 Error: writing output failed: Broken pipe 00:08:08.016 13:36:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:08.016 13:36:00 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:08.016 13:36:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:08.016 13:36:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 ************************************ 00:08:08.016 START TEST accel_crc32c 00:08:08.016 ************************************ 00:08:08.016 13:36:00 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:08.016 13:36:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:08.016 [2024-06-11 13:36:00.864494] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:08.016 [2024-06-11 13:36:00.864574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916580 ] 00:08:08.016 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.016 [2024-06-11 13:36:00.927538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.277 [2024-06-11 13:36:00.993791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c 
-- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:08.277 13:36:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.219 13:36:02 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:09.219 13:36:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.219 00:08:09.219 real 0m1.286s 00:08:09.219 user 0m1.185s 00:08:09.219 sys 0m0.111s 00:08:09.219 13:36:02 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:09.219 13:36:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:09.219 ************************************ 00:08:09.219 END TEST accel_crc32c 00:08:09.219 ************************************ 00:08:09.478 13:36:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:09.478 13:36:02 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:09.478 13:36:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:09.478 13:36:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.478 ************************************ 00:08:09.478 START TEST accel_crc32c_C2 00:08:09.478 ************************************ 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:09.478 [2024-06-11 13:36:02.224528] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:09.478 [2024-06-11 13:36:02.224595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916998 ] 00:08:09.478 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.478 [2024-06-11 13:36:02.288847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.478 [2024-06-11 13:36:02.357690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.478 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.737 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:09.738 13:36:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.678 13:36:03 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.678 00:08:10.678 real 0m1.290s 00:08:10.678 user 0m1.197s 00:08:10.678 sys 0m0.104s 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:10.678 13:36:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:10.678 ************************************ 00:08:10.678 END TEST accel_crc32c_C2 00:08:10.678 ************************************ 00:08:10.678 13:36:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:10.678 13:36:03 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:10.678 13:36:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:10.678 13:36:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.678 ************************************ 00:08:10.678 START TEST accel_copy 00:08:10.678 ************************************ 00:08:10.678 13:36:03 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:10.678 13:36:03 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:10.678 13:36:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:10.938 [2024-06-11 13:36:03.590431] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:10.938 [2024-06-11 13:36:03.590527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917336 ] 00:08:10.938 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.938 [2024-06-11 13:36:03.654821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.938 [2024-06-11 13:36:03.724787] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.938 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:10.939 13:36:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
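Note on the two crc32c runs above (accel_crc32c and accel_crc32c_C2): both drive the same accel_perf binary through the accel.sh run_test wrapper. The xtrace shows build_accel_config producing an empty accel JSON config (accel_json_cfg=() and [[ -n '' ]]), then the harness echoing a software crc32c workload over 4096-byte buffers for 1 second. Below is a minimal sketch of re-running the two invocations by hand; the binary path is taken from this log, the -S 32 and -C 2 arguments are copied verbatim from the echoed command lines, and the hugepage/EAL setup on this node is assumed to already be in place.

# Sketch: re-run the two crc32c cases outside the test harness.
# The harness normally also pipes a generated accel JSON config via
# '-c /dev/fd/62'; it is omitted here because the config echoed above was empty.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# accel_crc32c: software crc32c, 4096-byte buffers, 1 second, with -y verification.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w crc32c -S 32 -y

# accel_crc32c_C2: same workload with the extra -C 2 argument from the C2 variant.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w crc32c -y -C 2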
00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:12.322 13:36:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:12.323 13:36:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:12.323 13:36:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.323 00:08:12.323 real 0m1.293s 00:08:12.323 user 0m1.200s 00:08:12.323 sys 0m0.104s 00:08:12.323 13:36:04 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:12.323 13:36:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.323 ************************************ 00:08:12.323 END TEST accel_copy 00:08:12.323 ************************************ 00:08:12.323 13:36:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.323 13:36:04 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:12.323 13:36:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:12.323 13:36:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.323 ************************************ 00:08:12.323 START TEST accel_fill 00:08:12.323 ************************************ 00:08:12.323 13:36:04 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.323 13:36:04 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:12.323 13:36:04 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:12.323 [2024-06-11 13:36:04.956146] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:12.323 [2024-06-11 13:36:04.956233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917526 ] 00:08:12.323 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.323 [2024-06-11 13:36:05.021866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.323 [2024-06-11 13:36:05.092167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:12.323 13:36:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:13.705 13:36:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.705 00:08:13.705 real 0m1.294s 00:08:13.705 user 0m1.203s 00:08:13.705 sys 0m0.103s 00:08:13.705 13:36:06 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:13.705 13:36:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:13.705 ************************************ 00:08:13.705 END TEST accel_fill 00:08:13.705 ************************************ 00:08:13.705 13:36:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:13.705 13:36:06 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:13.705 13:36:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:13.705 13:36:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:13.705 ************************************ 00:08:13.705 START TEST accel_copy_crc32c 00:08:13.705 ************************************ 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
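Note on the copy and fill runs above: they follow the same pattern as the crc32c cases. For accel_fill the wrapper passed -f 128 -q 64 -a 64, and the parsed configuration in the xtrace shows the 128 again as 0x80 alongside the two 64 values and the usual 4096-byte buffer, so the extra flags reach accel_perf unchanged. A hand-run sketch under the same assumptions as the crc32c sketch:

# Sketch: the copy and fill cases, same assumptions as above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# accel_copy: software copy of 4096-byte buffers for 1 second, with verification.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy -y

# accel_fill: -f 128 shows up as 0x80 (128 decimal) in the parsed config above;
# the two 64s from -q 64 -a 64 are echoed back as-is.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y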
00:08:13.705 [2024-06-11 13:36:06.328314] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:13.705 [2024-06-11 13:36:06.328405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917740 ] 00:08:13.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.705 [2024-06-11 13:36:06.396815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.705 [2024-06-11 13:36:06.471060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.705 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.706 13:36:06 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.706 13:36:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.087 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.088 00:08:15.088 real 0m1.303s 00:08:15.088 user 0m1.204s 00:08:15.088 sys 0m0.110s 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:15.088 13:36:07 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:15.088 ************************************ 00:08:15.088 END TEST accel_copy_crc32c 00:08:15.088 ************************************ 00:08:15.088 13:36:07 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:15.088 13:36:07 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:15.088 13:36:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:15.088 13:36:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.088 ************************************ 00:08:15.088 START TEST accel_copy_crc32c_C2 00:08:15.088 ************************************ 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:15.088 [2024-06-11 13:36:07.702896] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:15.088 [2024-06-11 13:36:07.703012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918092 ] 00:08:15.088 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.088 [2024-06-11 13:36:07.774070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.088 [2024-06-11 13:36:07.845332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 
-- # accel_opc=copy_crc32c 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:15.088 13:36:07 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.089 13:36:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.472 00:08:16.472 real 0m1.302s 00:08:16.472 user 0m1.206s 00:08:16.472 sys 0m0.108s 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:16.472 13:36:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:16.472 
************************************ 00:08:16.472 END TEST accel_copy_crc32c_C2 00:08:16.472 ************************************ 00:08:16.472 13:36:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:16.473 13:36:09 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:16.473 13:36:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:16.473 13:36:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.473 ************************************ 00:08:16.473 START TEST accel_dualcast 00:08:16.473 ************************************ 00:08:16.473 13:36:09 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:16.473 [2024-06-11 13:36:09.077683] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
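Note on the two copy_crc32c runs above: they differ only in the trailing -C 2. The plain run's parsed config shows two 4096-byte buffers, while the -C 2 run shows a 4096-byte and an 8192-byte buffer. Both finished in roughly 1.3 s of wall clock, like every case so far. A hand-run sketch under the same assumptions as the earlier sketches, plus a one-liner for pulling the per-test timing summaries out of a saved copy of this console output (the console.log filename is an assumption):

# Sketch: the two copy_crc32c cases, same assumptions as the earlier sketches.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# accel_copy_crc32c: the parsed config above shows two 4096-byte buffers.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y

# accel_copy_crc32c_C2: with -C 2 the second buffer becomes 8192 bytes.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2

# Hypothetical: summarize the per-test wall-clock times from a saved console log.
grep -E 'real[[:space:]]+[0-9]+m[0-9.]+s' console.log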
00:08:16.473 [2024-06-11 13:36:09.077759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918650 ] 00:08:16.473 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.473 [2024-06-11 13:36:09.142762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.473 [2024-06-11 13:36:09.212649] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 
13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:16.473 13:36:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:17.855 13:36:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.855 00:08:17.855 real 0m1.293s 00:08:17.855 user 0m1.188s 00:08:17.855 sys 0m0.115s 00:08:17.855 13:36:10 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:17.855 13:36:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 ************************************ 00:08:17.855 END TEST accel_dualcast 00:08:17.855 ************************************ 00:08:17.855 13:36:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:17.855 13:36:10 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:17.855 13:36:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:17.855 13:36:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 ************************************ 00:08:17.855 START TEST accel_compare 00:08:17.855 ************************************ 00:08:17.855 13:36:10 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:17.855 [2024-06-11 13:36:10.445045] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
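The long runs of IFS=:, read -r var val and case "$var" in records that fill this section come from one small loop in accel.sh: it appears to read accel_perf's configuration banner line by line, split each line at the colon, and keep the fields it cares about (the opcode ends up in accel_opc, the module in accel_module). That loop is also where the val=0x1, val=compare, val='4096 bytes', val=software, val=32 and val='1 seconds' records originate. A simplified sketch of the pattern follows; the key names and the printf stand-in input are illustrative, not the script's actual input.

# Sketch of the key:value parsing behind the repeated trace records above.
while IFS=: read -r var val; do
    case "$var" in
        *"Workload Type"*) accel_opc=$(tr -d ' ' <<< "$val") ;;    # e.g. compare
        *Module*)          accel_module=$(tr -d ' ' <<< "$val") ;; # e.g. software
    esac
done < <(printf 'Module: software\nWorkload Type: compare\n')     # stand-in for accel_perf output
# The assertions seen at the end of every test:
[[ -n $accel_module && -n $accel_opc && $accel_module == software ]]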
00:08:17.855 [2024-06-11 13:36:10.445132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919188 ] 00:08:17.855 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.855 [2024-06-11 13:36:10.509452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.855 [2024-06-11 13:36:10.579175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:17.855 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:17.856 13:36:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:18.792 13:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:18.792 13:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:18.792 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:19.051 13:36:11 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.051 00:08:19.051 real 0m1.292s 00:08:19.051 user 0m1.202s 00:08:19.051 sys 0m0.100s 00:08:19.051 13:36:11 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:19.051 13:36:11 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:19.051 ************************************ 00:08:19.051 END TEST accel_compare 00:08:19.051 ************************************ 00:08:19.051 13:36:11 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:19.051 13:36:11 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:08:19.051 13:36:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:19.051 13:36:11 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.051 ************************************ 00:08:19.051 START TEST accel_xor 00:08:19.051 ************************************ 00:08:19.051 13:36:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:19.051 13:36:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:19.051 [2024-06-11 13:36:11.814437] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
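The xor case above is launched the same way; note the val=xor and val=2 records in the trace that follows, which apparently reflect the workload name and the default of two xor source buffers. The standalone equivalent, under the same assumptions as the dualcast sketch earlier:

# Default xor workload: verify enabled (-y), source count not overridden
# (the trace for this run records val=2).
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y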
00:08:19.052 [2024-06-11 13:36:11.814522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919454 ] 00:08:19.052 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.052 [2024-06-11 13:36:11.878794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.052 [2024-06-11 13:36:11.947102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.310 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:19.311 13:36:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.249 
13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.249 00:08:20.249 real 0m1.291s 00:08:20.249 user 0m1.197s 00:08:20.249 sys 0m0.104s 00:08:20.249 13:36:13 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:20.249 13:36:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:20.249 ************************************ 00:08:20.249 END TEST accel_xor 00:08:20.249 ************************************ 00:08:20.249 13:36:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:20.249 13:36:13 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:20.249 13:36:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:20.249 13:36:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.249 ************************************ 00:08:20.249 START TEST accel_xor 00:08:20.249 ************************************ 00:08:20.249 13:36:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:20.249 13:36:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:20.510 [2024-06-11 13:36:13.179450] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
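Here the harness immediately re-runs the xor workload with three source buffers: run_test passes -x 3 straight through, the accel_perf command recorded above includes it, and the parser then logs val=3 instead of val=2. Because both variants are registered under the same test name, the START TEST accel_xor / END TEST accel_xor banners appear twice in this section. The standalone equivalent, same assumptions as before:

# xor workload with three source buffers, matching the '-x 3' run above.
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3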
00:08:20.510 [2024-06-11 13:36:13.179544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919643 ] 00:08:20.510 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.510 [2024-06-11 13:36:13.244446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.510 [2024-06-11 13:36:13.313336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:20.510 13:36:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.890 
13:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:21.890 13:36:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.890 00:08:21.890 real 0m1.293s 00:08:21.890 user 0m1.207s 00:08:21.890 sys 0m0.097s 00:08:21.890 13:36:14 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:21.890 13:36:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:21.890 ************************************ 00:08:21.890 END TEST accel_xor 00:08:21.890 ************************************ 00:08:21.890 13:36:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:21.890 13:36:14 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:21.890 13:36:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:21.890 13:36:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.890 ************************************ 00:08:21.890 START TEST accel_dif_verify 00:08:21.890 ************************************ 00:08:21.890 13:36:14 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:21.890 13:36:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:21.890 [2024-06-11 13:36:14.545064] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
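The dif_verify case just started is launched without -y, and its trace below carries extra byte-size values: alongside the usual '4096 bytes' there are '512 bytes' and '8 bytes' records, presumably the DIF block and metadata sizes, though the log itself does not name them; the later val=No record is presumably the verify setting that showed val=Yes on the -y runs above. A standalone equivalent under the same assumptions as the earlier sketches:

# DIF verify workload for 1 second; block/metadata sizes left at the harness
# defaults (the trace for this run shows '512 bytes' and '8 bytes').
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify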
00:08:21.890 [2024-06-11 13:36:14.545128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919979 ] 00:08:21.890 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.891 [2024-06-11 13:36:14.607487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.891 [2024-06-11 13:36:14.672731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 
13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:21.891 13:36:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.273 
13:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:23.273 13:36:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.273 00:08:23.273 real 0m1.285s 00:08:23.273 user 0m1.196s 00:08:23.273 sys 0m0.102s 00:08:23.273 13:36:15 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:23.273 13:36:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:23.273 ************************************ 00:08:23.273 END TEST accel_dif_verify 00:08:23.273 ************************************ 00:08:23.273 13:36:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:23.273 13:36:15 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:23.273 13:36:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:23.273 13:36:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.273 ************************************ 00:08:23.274 START TEST accel_dif_generate 00:08:23.274 ************************************ 00:08:23.274 13:36:15 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 
13:36:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:23.274 13:36:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:23.274 [2024-06-11 13:36:15.905252] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:23.274 [2024-06-11 13:36:15.905315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920326 ] 00:08:23.274 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.274 [2024-06-11 13:36:15.968512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.274 [2024-06-11 13:36:16.036750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:23.274 13:36:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:24.657 13:36:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.657 00:08:24.657 real 0m1.291s 00:08:24.657 user 0m1.195s 00:08:24.657 sys 0m0.109s 00:08:24.657 
13:36:17 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:24.657 13:36:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:24.657 ************************************ 00:08:24.657 END TEST accel_dif_generate 00:08:24.657 ************************************ 00:08:24.657 13:36:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:24.657 13:36:17 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:08:24.657 13:36:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:24.657 13:36:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.657 ************************************ 00:08:24.657 START TEST accel_dif_generate_copy 00:08:24.657 ************************************ 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:24.657 [2024-06-11 13:36:17.268957] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
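# Sketch (not captured output): accel.sh drives every case above through accel_perf,
# feeding the accel JSON config over /dev/fd/62; with accel_json_cfg=() empty, that
# config carries nothing. Repeating just this dif_generate_copy case by hand against
# the same build tree should reduce to roughly:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy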
00:08:24.657 [2024-06-11 13:36:17.269033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920677 ] 00:08:24.657 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.657 [2024-06-11 13:36:17.331055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.657 [2024-06-11 13:36:17.395177] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.657 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:24.658 13:36:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
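# Sketch of the pass condition used by each accel case above: once accel_perf exits,
# the trace captures accel_module and accel_opc from its config dump, and the test
# passes only when the opcode ran on the software module. A paraphrase of that check
# (variable names follow the accel_module=/accel_opc= assignments in the trace, not a
# verbatim copy of accel.sh):
[[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == "software" ]]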
00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.042 00:08:26.042 real 0m1.284s 00:08:26.042 user 0m1.199s 00:08:26.042 sys 0m0.097s 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:26.042 13:36:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.042 ************************************ 00:08:26.042 END TEST accel_dif_generate_copy 00:08:26.042 ************************************ 00:08:26.042 13:36:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:26.042 13:36:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.042 13:36:18 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:08:26.042 13:36:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:26.042 13:36:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.042 ************************************ 00:08:26.042 START TEST accel_comp 00:08:26.042 ************************************ 00:08:26.042 13:36:18 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@17 -- # local 
accel_module 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:26.042 [2024-06-11 13:36:18.629112] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:26.042 [2024-06-11 13:36:18.629180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920902 ] 00:08:26.042 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.042 [2024-06-11 13:36:18.693561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.042 [2024-06-11 13:36:18.764100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.042 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.043 13:36:18 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.043 13:36:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:26.982 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:27.243 13:36:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.243 00:08:27.243 real 0m1.297s 00:08:27.243 user 0m1.199s 00:08:27.243 sys 0m0.111s 00:08:27.243 13:36:19 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:27.243 13:36:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:27.243 ************************************ 00:08:27.243 END TEST accel_comp 00:08:27.243 ************************************ 00:08:27.243 13:36:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:27.243 13:36:19 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:08:27.243 13:36:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:27.243 13:36:19 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.243 ************************************ 00:08:27.243 START TEST accel_decomp 00:08:27.243 ************************************ 00:08:27.243 13:36:19 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:27.243 13:36:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:27.243 [2024-06-11 13:36:19.998145] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:27.243 [2024-06-11 13:36:19.998207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921092 ] 00:08:27.243 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.243 [2024-06-11 13:36:20.062585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.243 [2024-06-11 13:36:20.131660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp 
-- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:27.504 13:36:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:28.448 13:36:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.448 00:08:28.448 real 0m1.293s 00:08:28.448 user 0m1.198s 00:08:28.448 sys 0m0.107s 00:08:28.448 13:36:21 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:28.448 13:36:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:28.448 ************************************ 00:08:28.448 END TEST accel_decomp 00:08:28.448 ************************************ 00:08:28.448 13:36:21 accel -- 
accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.448 13:36:21 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:28.448 13:36:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:28.448 13:36:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.448 ************************************ 00:08:28.448 START TEST accel_decomp_full 00:08:28.448 ************************************ 00:08:28.448 13:36:21 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:28.448 13:36:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:28.710 [2024-06-11 13:36:21.367741] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
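# Sketch: the "full" decompress variant differs from accel_decomp only by the extra
# -o 0; the config dump that follows reads a '111250 bytes' buffer instead of the
# '4096 bytes' used above, which suggests the whole test/accel/bib input is handled
# as a single buffer. Stand-alone equivalent, assuming the same workspace layout:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0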
00:08:28.710 [2024-06-11 13:36:21.367833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921414 ] 00:08:28.710 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.710 [2024-06-11 13:36:21.430518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.710 [2024-06-11 13:36:21.496009] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.710 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.710 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.710 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.710 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:28.711 13:36:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:30.100 13:36:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.100 00:08:30.100 real 0m1.300s 00:08:30.100 user 0m1.216s 00:08:30.100 sys 0m0.096s 00:08:30.100 13:36:22 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:30.100 13:36:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:30.100 ************************************ 00:08:30.100 END TEST accel_decomp_full 00:08:30.100 ************************************ 00:08:30.100 13:36:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.100 13:36:22 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:30.100 13:36:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:30.100 13:36:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.100 ************************************ 00:08:30.100 START TEST accel_decomp_mcore 00:08:30.100 ************************************ 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@15 
-- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:30.100 [2024-06-11 13:36:22.740840] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:30.100 [2024-06-11 13:36:22.740938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921772 ] 00:08:30.100 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.100 [2024-06-11 13:36:22.805052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.100 [2024-06-11 13:36:22.874990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.100 [2024-06-11 13:36:22.875128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.100 [2024-06-11 13:36:22.875370] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.100 [2024-06-11 13:36:22.875372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
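# Sketch: the mcore variant adds -m 0xf, so this decompress run is scheduled across
# the four reactors started above (cores 0-3) rather than a single core. Stand-alone
# equivalent under the same assumptions:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf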
00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.100 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 
00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:30.101 13:36:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.486 00:08:31.486 real 0m1.303s 00:08:31.486 user 0m4.432s 00:08:31.486 sys 0m0.116s 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:31.486 13:36:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:31.486 ************************************ 00:08:31.486 END TEST accel_decomp_mcore 00:08:31.486 ************************************ 00:08:31.487 13:36:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:31.487 13:36:24 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:31.487 13:36:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:31.487 13:36:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.487 ************************************ 00:08:31.487 START TEST accel_decomp_full_mcore 00:08:31.487 ************************************ 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.487 13:36:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:31.487 [2024-06-11 13:36:24.117257] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:31.487 [2024-06-11 13:36:24.117340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922122 ] 00:08:31.487 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.487 [2024-06-11 13:36:24.183104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.487 [2024-06-11 13:36:24.255916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.487 [2024-06-11 13:36:24.256039] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.487 [2024-06-11 13:36:24.256139] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.487 [2024-06-11 13:36:24.256140] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
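The recurring "EAL: No free 2048 kB hugepages reported on node 1" line is informational: DPDK found no 2 MB hugepages on NUMA node 1, and the runs here proceed anyway, presumably on the pool provided by the other node. If a run on this rig ever fails to allocate memory, the per-node pools can be inspected directly; the sysfs paths below are standard kernel paths, not something taken from this log.

# Sketch: show how many 2 MB hugepages each NUMA node actually has
for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
    echo "$n: $(cat "$n")"
done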
00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:31.487 13:36:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.876 00:08:32.876 real 0m1.322s 00:08:32.876 user 0m4.491s 00:08:32.876 sys 0m0.119s 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:32.876 13:36:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 ************************************ 00:08:32.876 END TEST accel_decomp_full_mcore 00:08:32.876 ************************************ 00:08:32.876 13:36:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:32.876 13:36:25 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:08:32.876 13:36:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:32.876 13:36:25 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 ************************************ 00:08:32.876 START TEST accel_decomp_mthread 00:08:32.876 ************************************ 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
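The full_* variants differ from the plain ones only in the -o 0 flag, and the effect shows up in the traced values: the plain runs read a data size of '4096 bytes' while accel_decomp_full_mcore reads '111250 bytes', presumably the full size of the bib test file, so the whole file is decompressed in one shot instead of 4 KiB chunks. That reading of -o 0 is an interpretation of the trace, not something the log states; the one-liner below checks whether the file size actually matches.

# Sketch: confirm the bib test file is the 111250 bytes seen in the full_* traces
stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib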
00:08:32.876 [2024-06-11 13:36:25.513428] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:32.876 [2024-06-11 13:36:25.513490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922409 ] 00:08:32.876 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.876 [2024-06-11 13:36:25.577471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.876 [2024-06-11 13:36:25.646578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.876 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:32.877 13:36:25 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.288 00:08:34.288 real 0m1.298s 00:08:34.288 user 0m1.208s 00:08:34.288 sys 0m0.103s 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:34.288 13:36:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:34.288 ************************************ 00:08:34.288 END TEST accel_decomp_mthread 00:08:34.288 ************************************ 00:08:34.288 13:36:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.288 13:36:26 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:08:34.288 13:36:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:34.288 13:36:26 accel 
-- common/autotest_common.sh@10 -- # set +x 00:08:34.288 ************************************ 00:08:34.288 START TEST accel_decomp_full_mthread 00:08:34.288 ************************************ 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:34.289 13:36:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:34.289 [2024-06-11 13:36:26.886343] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
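The mthread variants drop the core mask to 0x1 and add -T 2, so a single reactor on core 0 runs the workload (the "Total cores available: 1" notice above reflects this), and accel_decomp_full_mthread, initializing here, combines that with the -o 0 whole-file transfer. Reading -T 2 as a worker-thread count is inferred from the test name and flags, not stated by the log. The sketch below is the traced command minus the -c /dev/fd/62 config descriptor that the harness supplies.

# Sketch: single-core, two-thread, whole-file software decompress (flags as traced)
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2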
00:08:34.289 [2024-06-11 13:36:26.886405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922623 ] 00:08:34.289 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.289 [2024-06-11 13:36:26.948591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.289 [2024-06-11 13:36:27.013583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:34.289 13:36:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.675 00:08:35.675 real 0m1.320s 00:08:35.675 user 0m1.239s 00:08:35.675 sys 0m0.095s 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:35.675 13:36:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:35.675 ************************************ 00:08:35.675 END TEST accel_decomp_full_mthread 00:08:35.675 
************************************ 00:08:35.675 13:36:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:35.675 13:36:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:35.675 13:36:28 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:35.675 13:36:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:35.675 13:36:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.675 13:36:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.675 13:36:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.675 13:36:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.675 13:36:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.675 13:36:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.675 13:36:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.675 13:36:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:35.675 13:36:28 accel -- accel/accel.sh@41 -- # jq -r . 00:08:35.675 ************************************ 00:08:35.675 START TEST accel_dif_functional_tests 00:08:35.675 ************************************ 00:08:35.675 13:36:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:35.675 [2024-06-11 13:36:28.298914] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:35.675 [2024-06-11 13:36:28.298964] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922867 ] 00:08:35.675 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.675 [2024-06-11 13:36:28.362955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:35.675 [2024-06-11 13:36:28.440614] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.675 [2024-06-11 13:36:28.440733] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.675 [2024-06-11 13:36:28.440736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.675 00:08:35.675 00:08:35.675 CUnit - A unit testing framework for C - Version 2.1-3 00:08:35.675 http://cunit.sourceforge.net/ 00:08:35.675 00:08:35.675 00:08:35.675 Suite: accel_dif 00:08:35.675 Test: verify: DIF generated, GUARD check ...passed 00:08:35.675 Test: verify: DIF generated, APPTAG check ...passed 00:08:35.675 Test: verify: DIF generated, REFTAG check ...passed 00:08:35.675 Test: verify: DIF not generated, GUARD check ...[2024-06-11 13:36:28.496576] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:35.675 passed 00:08:35.675 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 13:36:28.496621] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:35.675 passed 00:08:35.675 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 13:36:28.496644] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:35.675 passed 00:08:35.675 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:35.675 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 13:36:28.496693] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:35.675 passed 00:08:35.675 Test: verify: 
APPTAG incorrect, no APPTAG check ...passed 00:08:35.675 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:35.675 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:35.675 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 13:36:28.496805] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:35.675 passed 00:08:35.675 Test: verify copy: DIF generated, GUARD check ...passed 00:08:35.675 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:35.675 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:35.676 Test: verify copy: DIF not generated, GUARD check ...[2024-06-11 13:36:28.496932] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:35.676 passed 00:08:35.676 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-11 13:36:28.496955] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:35.676 passed 00:08:35.676 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-11 13:36:28.496976] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:35.676 passed 00:08:35.676 Test: generate copy: DIF generated, GUARD check ...passed 00:08:35.676 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:35.676 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:35.676 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:35.676 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:35.676 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:35.676 Test: generate copy: iovecs-len validate ...[2024-06-11 13:36:28.497165] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:35.676 passed 00:08:35.676 Test: generate copy: buffer alignment validate ...passed 00:08:35.676 00:08:35.676 Run Summary: Type Total Ran Passed Failed Inactive 00:08:35.676 suites 1 1 n/a 0 0 00:08:35.676 tests 26 26 26 0 0 00:08:35.676 asserts 115 115 115 0 n/a 00:08:35.676 00:08:35.676 Elapsed time = 0.002 seconds 00:08:35.937 00:08:35.937 real 0m0.359s 00:08:35.937 user 0m0.505s 00:08:35.937 sys 0m0.119s 00:08:35.937 13:36:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:35.937 13:36:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 ************************************ 00:08:35.937 END TEST accel_dif_functional_tests 00:08:35.937 ************************************ 00:08:35.937 00:08:35.937 real 0m30.143s 00:08:35.937 user 0m33.757s 00:08:35.937 sys 0m4.136s 00:08:35.937 13:36:28 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:35.937 13:36:28 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 ************************************ 00:08:35.937 END TEST accel 00:08:35.937 ************************************ 00:08:35.937 13:36:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:35.937 13:36:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:35.937 13:36:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.937 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:08:35.937 ************************************ 00:08:35.937 START TEST accel_rpc 00:08:35.937 ************************************ 00:08:35.937 13:36:28 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:35.937 * Looking for test storage... 00:08:35.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:35.937 13:36:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:35.937 13:36:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1923175 00:08:35.937 13:36:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1923175 00:08:35.937 13:36:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:35.937 13:36:28 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1923175 ']' 00:08:35.937 13:36:28 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.938 13:36:28 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:35.938 13:36:28 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.938 13:36:28 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:35.938 13:36:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.199 [2024-06-11 13:36:28.887891] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
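The accel_dif_functional_tests suite that closes out the accel tests above is a CUnit run of 26 DIF cases, all passing. The dif.c error lines such as "Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867" are the expected output of the negative-path checks (verify with a deliberately wrong guard, app tag, or reference tag), not real failures, which is why each is immediately followed by "passed". For reference, the invocation the harness uses is shown below; the JSON config arrives on file descriptor 62 from build_accel_config, so running it verbatim outside the harness would need that descriptor set up first.

# Harness invocation of the DIF functional tests (config JSON supplied on fd 62 by build_accel_config)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62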
00:08:36.199 [2024-06-11 13:36:28.887966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923175 ] 00:08:36.199 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.199 [2024-06-11 13:36:28.953443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.199 [2024-06-11 13:36:29.028063] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.771 13:36:29 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:36.771 13:36:29 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:36.771 13:36:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:36.771 13:36:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:36.771 13:36:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:36.771 13:36:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:36.771 13:36:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:36.771 13:36:29 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:36.771 13:36:29 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:36.771 13:36:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.771 ************************************ 00:08:36.771 START TEST accel_assign_opcode 00:08:36.771 ************************************ 00:08:36.771 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:08:36.771 13:36:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:36.771 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:36.771 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:36.771 [2024-06-11 13:36:29.681978] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 [2024-06-11 13:36:29.694001] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:37.032 13:36:29 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:37.032 software 00:08:37.032 00:08:37.032 real 0m0.210s 00:08:37.032 user 0m0.052s 00:08:37.032 sys 0m0.008s 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:37.032 13:36:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:37.032 ************************************ 00:08:37.032 END TEST accel_assign_opcode 00:08:37.032 ************************************ 00:08:37.032 13:36:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1923175 00:08:37.032 13:36:29 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1923175 ']' 00:08:37.032 13:36:29 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1923175 00:08:37.032 13:36:29 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:08:37.032 13:36:29 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:37.032 13:36:29 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1923175 00:08:37.294 13:36:29 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:37.294 13:36:29 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:37.294 13:36:29 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1923175' 00:08:37.294 killing process with pid 1923175 00:08:37.294 13:36:29 accel_rpc -- common/autotest_common.sh@968 -- # kill 1923175 00:08:37.294 13:36:29 accel_rpc -- common/autotest_common.sh@973 -- # wait 1923175 00:08:37.294 00:08:37.294 real 0m1.468s 00:08:37.294 user 0m1.548s 00:08:37.294 sys 0m0.402s 00:08:37.294 13:36:30 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:37.294 13:36:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.294 ************************************ 00:08:37.294 END TEST accel_rpc 00:08:37.294 ************************************ 00:08:37.557 13:36:30 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.557 13:36:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:37.557 13:36:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:37.557 13:36:30 -- common/autotest_common.sh@10 -- # set +x 00:08:37.557 ************************************ 00:08:37.557 START TEST app_cmdline 00:08:37.557 ************************************ 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:37.557 * Looking for test storage... 
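The accel_rpc/accel_assign_opcode test that just finished drives a bare spdk_tgt started with --wait-for-rpc: it first assigns the copy opcode to a module named "incorrect", then to "software", calls framework_start_init, and finally checks that accel_get_opc_assignments reports copy as software. The same sequence can be replayed against a running target with rpc.py; the commands below use only RPC names that appear in this log and assume the default /var/tmp/spdk.sock socket.

# Sketch: replay the opcode-assignment sequence against a spdk_tgt --wait-for-rpc instance
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC accel_assign_opc -o copy -m software
$RPC framework_start_init
$RPC accel_get_opc_assignments | grep software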
00:08:37.557 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:37.557 13:36:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:37.557 13:36:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1923509 00:08:37.557 13:36:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1923509 00:08:37.557 13:36:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1923509 ']' 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:37.557 13:36:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:37.557 [2024-06-11 13:36:30.428865] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:37.557 [2024-06-11 13:36:30.428936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923509 ] 00:08:37.557 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.817 [2024-06-11 13:36:30.494192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.817 [2024-06-11 13:36:30.570499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.388 13:36:31 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:38.388 13:36:31 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:08:38.388 13:36:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:38.649 { 00:08:38.649 "version": "SPDK v24.09-pre git sha1 9ccef4907", 00:08:38.649 "fields": { 00:08:38.649 "major": 24, 00:08:38.649 "minor": 9, 00:08:38.649 "patch": 0, 00:08:38.649 "suffix": "-pre", 00:08:38.649 "commit": "9ccef4907" 00:08:38.649 } 00:08:38.649 } 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:38.649 13:36:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.649 request: 00:08:38.649 { 00:08:38.649 "method": "env_dpdk_get_mem_stats", 00:08:38.649 "req_id": 1 00:08:38.649 } 00:08:38.649 Got JSON-RPC error response 00:08:38.649 response: 00:08:38.649 { 00:08:38.649 "code": -32601, 00:08:38.649 "message": "Method not found" 00:08:38.649 } 00:08:38.649 13:36:31 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:38.909 13:36:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1923509 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1923509 ']' 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1923509 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1923509 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1923509' 00:08:38.909 killing process with pid 1923509 00:08:38.909 13:36:31 app_cmdline -- common/autotest_common.sh@968 -- # kill 1923509 00:08:38.910 13:36:31 app_cmdline -- common/autotest_common.sh@973 -- # wait 1923509 00:08:39.171 00:08:39.171 real 0m1.550s 00:08:39.171 user 0m1.838s 00:08:39.171 sys 0m0.414s 00:08:39.171 13:36:31 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.171 13:36:31 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.171 ************************************ 00:08:39.171 END TEST app_cmdline 00:08:39.171 ************************************ 00:08:39.171 13:36:31 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:39.171 13:36:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:39.171 13:36:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.171 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.171 ************************************ 00:08:39.171 START TEST version 00:08:39.171 ************************************ 00:08:39.171 13:36:31 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:39.171 * Looking for test storage... 00:08:39.171 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:39.171 13:36:31 version -- app/version.sh@17 -- # get_header_version major 00:08:39.171 13:36:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.171 13:36:31 version -- app/version.sh@14 -- # cut -f2 00:08:39.171 13:36:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.171 13:36:32 version -- app/version.sh@17 -- # major=24 00:08:39.171 13:36:32 version -- app/version.sh@18 -- # get_header_version minor 00:08:39.171 13:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.171 13:36:32 version -- app/version.sh@14 -- # cut -f2 00:08:39.171 13:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.171 13:36:32 version -- app/version.sh@18 -- # minor=9 00:08:39.171 13:36:32 version -- app/version.sh@19 -- # get_header_version patch 00:08:39.171 13:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.171 13:36:32 version -- app/version.sh@14 -- # cut -f2 00:08:39.171 13:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.171 13:36:32 version -- app/version.sh@19 -- # patch=0 00:08:39.171 13:36:32 version -- app/version.sh@20 -- # get_header_version suffix 00:08:39.171 13:36:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:39.171 13:36:32 version -- app/version.sh@14 -- # cut -f2 00:08:39.171 13:36:32 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.171 13:36:32 version -- app/version.sh@20 -- # suffix=-pre 00:08:39.171 13:36:32 version -- app/version.sh@22 -- # version=24.9 00:08:39.171 13:36:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:39.171 13:36:32 version -- app/version.sh@28 -- # version=24.9rc0 00:08:39.171 13:36:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:39.171 13:36:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:39.171 13:36:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:39.171 13:36:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == 
\2\4\.\9\r\c\0 ]] 00:08:39.171 00:08:39.171 real 0m0.182s 00:08:39.171 user 0m0.090s 00:08:39.171 sys 0m0.132s 00:08:39.171 13:36:32 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:39.171 13:36:32 version -- common/autotest_common.sh@10 -- # set +x 00:08:39.171 ************************************ 00:08:39.171 END TEST version 00:08:39.171 ************************************ 00:08:39.431 13:36:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@198 -- # uname -s 00:08:39.431 13:36:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:39.431 13:36:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:39.431 13:36:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:39.431 13:36:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:39.431 13:36:32 -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:39.431 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:08:39.431 13:36:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:39.431 13:36:32 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:08:39.431 13:36:32 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:39.431 13:36:32 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:39.431 13:36:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.431 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:08:39.431 ************************************ 00:08:39.431 START TEST nvmf_rdma 00:08:39.431 ************************************ 00:08:39.431 13:36:32 nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:39.431 * Looking for test storage... 00:08:39.431 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.431 13:36:32 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:39.432 13:36:32 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.432 13:36:32 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.432 13:36:32 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.432 13:36:32 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.432 13:36:32 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.432 13:36:32 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.432 13:36:32 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:08:39.432 13:36:32 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.432 13:36:32 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.692 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:39.692 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:39.692 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:39.693 13:36:32 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:39.693 13:36:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:39.693 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:39.693 13:36:32 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:39.693 13:36:32 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:39.693 13:36:32 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:39.693 13:36:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:39.693 ************************************ 00:08:39.693 START TEST nvmf_example 00:08:39.693 ************************************ 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:39.693 * Looking for test storage... 
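Each nvmf suite here starts by sourcing test/nvmf/common.sh, and the xtrace above shows the defaults it establishes on this runner. As a condensed sketch of that environment (values copied from the trace, not from the script itself):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # this runner produced nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    NET_TYPE=phy                       # phy: drive the real mlx5 NICs rather than soft-RoCE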
00:08:39.693 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:39.693 13:36:32 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.693 13:36:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:47.831 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:47.831 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.831 13:36:39 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:47.831 Found net devices under 0000:98:00.0: mlx_0_0 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:47.831 Found net devices under 0000:98:00.1: mlx_0_1 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:47.831 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.831 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:47.831 altname enp152s0f0np0 00:08:47.831 altname ens817f0np0 00:08:47.831 inet 192.168.100.8/24 scope global mlx_0_0 00:08:47.831 valid_lft forever preferred_lft forever 00:08:47.831 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:47.832 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:47.832 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:47.832 altname enp152s0f1np1 00:08:47.832 altname ens817f1np1 00:08:47.832 inet 192.168.100.9/24 scope global mlx_0_1 00:08:47.832 valid_lft forever preferred_lft forever 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:47.832 13:36:39 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:47.832 192.168.100.9' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:47.832 192.168.100.9' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:47.832 
13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:47.832 192.168.100.9' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1927835 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1927835 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1927835 ']' 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
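The address discovery traced above (get_rdma_if_list / get_ip_address) boils down to a small pipeline over the mlx_0_* netdevs. A hand-run equivalent, using the interface names this runner reported, would be:

    # Mirrors the ip/awk/cut pipeline in the trace; interface names are the
    # ones reported on this host, so adjust for other machines.
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # -> 192.168.100.8
    #    192.168.100.9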
00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:47.832 13:36:39 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.832 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:47.832 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:48.093 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:48.094 13:36:40 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
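Before the perf run whose output follows, the example target was configured through the rpc_cmd calls traced above. Condensed into direct scripts/rpc.py invocations (a hand-run sketch; the default /var/tmp/spdk.sock socket the app listens on is assumed), the sequence is:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512                        # returns "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420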
00:08:48.094 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.327 Initializing NVMe Controllers 00:09:00.327 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.327 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:00.327 Initialization complete. Launching workers. 00:09:00.327 ======================================================== 00:09:00.327 Latency(us) 00:09:00.327 Device Information : IOPS MiB/s Average min max 00:09:00.327 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 27459.78 107.26 2330.45 689.83 16023.59 00:09:00.327 ======================================================== 00:09:00.327 Total : 27459.78 107.26 2330.45 689.83 16023.59 00:09:00.328 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:00.328 rmmod nvme_rdma 00:09:00.328 rmmod nvme_fabrics 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1927835 ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1927835 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1927835 ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1927835 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1927835 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1927835' 00:09:00.328 killing process with pid 1927835 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@968 -- # kill 1927835 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@973 -- # wait 1927835 00:09:00.328 nvmf threads initialize successfully 00:09:00.328 bdev subsystem init successfully 00:09:00.328 created a nvmf target service 00:09:00.328 create targets's poll groups done 00:09:00.328 all subsystems of target started 00:09:00.328 nvmf target is running 00:09:00.328 all subsystems of target stopped 00:09:00.328 destroy targets's poll groups 
done 00:09:00.328 destroyed the nvmf target service 00:09:00.328 bdev subsystem finish successfully 00:09:00.328 nvmf threads destroy successfully 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:00.328 00:09:00.328 real 0m20.068s 00:09:00.328 user 0m52.412s 00:09:00.328 sys 0m5.701s 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:00.328 13:36:52 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:00.328 ************************************ 00:09:00.328 END TEST nvmf_example 00:09:00.328 ************************************ 00:09:00.328 13:36:52 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:00.328 13:36:52 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:00.328 13:36:52 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:00.328 13:36:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:00.328 ************************************ 00:09:00.328 START TEST nvmf_filesystem 00:09:00.328 ************************************ 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:00.328 * Looking for test storage... 
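One quick sanity check on the nvmf_example perf summary above before following the filesystem suite: with the 4 KiB I/O size passed to spdk_nvme_perf (-o 4096), the MiB/s column follows directly from the IOPS column.

    # Re-derives the reported MiB/s from the reported IOPS at a 4096-byte I/O size.
    awk 'BEGIN { iops = 27459.78; bs = 4096;
                 printf "%.2f MiB/s\n", iops * bs / (1024 * 1024) }'
    # -> 107.26 MiB/s, matching the perf summary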
00:09:00.328 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:00.328 13:36:52 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:09:00.328 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:00.329 13:36:52 
nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:00.329 #define SPDK_CONFIG_H 00:09:00.329 #define SPDK_CONFIG_APPS 1 00:09:00.329 #define SPDK_CONFIG_ARCH native 00:09:00.329 #undef SPDK_CONFIG_ASAN 00:09:00.329 #undef SPDK_CONFIG_AVAHI 00:09:00.329 #undef SPDK_CONFIG_CET 00:09:00.329 #define SPDK_CONFIG_COVERAGE 1 00:09:00.329 #define SPDK_CONFIG_CROSS_PREFIX 00:09:00.329 #undef SPDK_CONFIG_CRYPTO 00:09:00.329 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:00.329 #undef SPDK_CONFIG_CUSTOMOCF 00:09:00.329 #undef SPDK_CONFIG_DAOS 00:09:00.329 #define SPDK_CONFIG_DAOS_DIR 00:09:00.329 #define SPDK_CONFIG_DEBUG 1 00:09:00.329 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:00.329 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:00.329 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:00.329 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:00.329 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:00.329 #undef SPDK_CONFIG_DPDK_UADK 00:09:00.329 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:00.329 #define SPDK_CONFIG_EXAMPLES 1 00:09:00.329 #undef SPDK_CONFIG_FC 00:09:00.329 #define SPDK_CONFIG_FC_PATH 00:09:00.329 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:00.329 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:00.329 #undef SPDK_CONFIG_FUSE 00:09:00.329 #undef SPDK_CONFIG_FUZZER 00:09:00.329 #define SPDK_CONFIG_FUZZER_LIB 00:09:00.329 #undef SPDK_CONFIG_GOLANG 00:09:00.329 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:00.329 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:00.329 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:00.329 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:00.329 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:00.329 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:00.329 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:00.329 #define SPDK_CONFIG_IDXD 1 00:09:00.329 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:00.329 #undef SPDK_CONFIG_IPSEC_MB 00:09:00.329 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:00.329 #define SPDK_CONFIG_ISAL 1 00:09:00.329 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:00.329 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:00.329 #define SPDK_CONFIG_LIBDIR 00:09:00.329 #undef SPDK_CONFIG_LTO 00:09:00.329 #define SPDK_CONFIG_MAX_LCORES 00:09:00.329 #define SPDK_CONFIG_NVME_CUSE 1 00:09:00.329 #undef SPDK_CONFIG_OCF 00:09:00.329 #define SPDK_CONFIG_OCF_PATH 
00:09:00.329 #define SPDK_CONFIG_OPENSSL_PATH 00:09:00.329 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:00.329 #define SPDK_CONFIG_PGO_DIR 00:09:00.329 #undef SPDK_CONFIG_PGO_USE 00:09:00.329 #define SPDK_CONFIG_PREFIX /usr/local 00:09:00.329 #undef SPDK_CONFIG_RAID5F 00:09:00.329 #undef SPDK_CONFIG_RBD 00:09:00.329 #define SPDK_CONFIG_RDMA 1 00:09:00.329 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:00.329 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:00.329 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:00.329 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:00.329 #define SPDK_CONFIG_SHARED 1 00:09:00.329 #undef SPDK_CONFIG_SMA 00:09:00.329 #define SPDK_CONFIG_TESTS 1 00:09:00.329 #undef SPDK_CONFIG_TSAN 00:09:00.329 #define SPDK_CONFIG_UBLK 1 00:09:00.329 #define SPDK_CONFIG_UBSAN 1 00:09:00.329 #undef SPDK_CONFIG_UNIT_TESTS 00:09:00.329 #undef SPDK_CONFIG_URING 00:09:00.329 #define SPDK_CONFIG_URING_PATH 00:09:00.329 #undef SPDK_CONFIG_URING_ZNS 00:09:00.329 #undef SPDK_CONFIG_USDT 00:09:00.329 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:00.329 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:00.329 #undef SPDK_CONFIG_VFIO_USER 00:09:00.329 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:00.329 #define SPDK_CONFIG_VHOST 1 00:09:00.329 #define SPDK_CONFIG_VIRTIO 1 00:09:00.329 #undef SPDK_CONFIG_VTUNE 00:09:00.329 #define SPDK_CONFIG_VTUNE_DIR 00:09:00.329 #define SPDK_CONFIG_WERROR 1 00:09:00.329 #define SPDK_CONFIG_WPDK_DIR 00:09:00.329 #undef SPDK_CONFIG_XNVME 00:09:00.329 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.329 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:00.330 13:36:52 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:00.330 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@158 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1930437 ]] 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1930437 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:00.331 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.K1EyW7 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.K1EyW7/tests/target /tmp/spdk.K1EyW7 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=957403136 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327026688 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=123752255488 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370984448 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5618728960 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64682115072 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864441856 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9756672 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:09:00.332 
13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64685133824 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685494272 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=360448 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:00.332 * Looking for test storage... 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=123752255488 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7833321472 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.332 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:00.332 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.333 13:36:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.476 13:36:59 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:08.476 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:08.476 13:36:59 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:08.476 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:08.476 Found net devices under 0000:98:00.0: mlx_0_0 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:08.476 Found net devices under 0000:98:00.1: mlx_0_1 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:09:08.476 13:36:59 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:08.476 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:08.477 
13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:08.477 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:08.477 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:08.477 altname enp152s0f0np0 00:09:08.477 altname ens817f0np0 00:09:08.477 inet 192.168.100.8/24 scope global mlx_0_0 00:09:08.477 valid_lft forever preferred_lft forever 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:08.477 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:08.477 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:08.477 altname enp152s0f1np1 00:09:08.477 altname ens817f1np1 00:09:08.477 inet 192.168.100.9/24 scope global mlx_0_1 00:09:08.477 valid_lft forever preferred_lft forever 00:09:08.477 13:36:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:08.477 13:37:00 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:08.477 192.168.100.9' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:08.477 192.168.100.9' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:08.477 192.168.100.9' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.477 ************************************ 00:09:08.477 START TEST nvmf_filesystem_no_in_capsule 00:09:08.477 ************************************ 00:09:08.477 13:37:00 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1934303 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1934303 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1934303 ']' 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.477 [2024-06-11 13:37:00.205370] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:08.477 [2024-06-11 13:37:00.205420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.477 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.477 [2024-06-11 13:37:00.268384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.477 [2024-06-11 13:37:00.341163] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.477 [2024-06-11 13:37:00.341198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.477 [2024-06-11 13:37:00.341206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.477 [2024-06-11 13:37:00.341212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.477 [2024-06-11 13:37:00.341218] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
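Condensed, the nvmf_tgt launch recorded in these records amounts to the following; this is only a minimal sketch, with the binary and script paths taken from this job's workspace (the rpc.py polling loop is an assumed approximation of waitforlisten, and the trace step merely illustrates the hint printed by app_setup_trace above):

  # Start the NVMe-oF target: shm id 0 (-i 0), all tracepoint groups (-e 0xFFFF), cores 0-3 (-m 0xF).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll the default RPC socket until the app answers (roughly what waitforlisten does).
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # Optional: snapshot the enabled tracepoints, exactly as the notice above suggests.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0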
00:09:08.477 [2024-06-11 13:37:00.341355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.477 [2024-06-11 13:37:00.341475] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.477 [2024-06-11 13:37:00.341633] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.477 [2024-06-11 13:37:00.341634] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:08.477 13:37:00 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.477 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.477 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:08.477 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:08.477 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.478 [2024-06-11 13:37:01.031618] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:08.478 [2024-06-11 13:37:01.062735] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x123be90/0x1240380) succeed. 00:09:08.478 [2024-06-11 13:37:01.077400] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x123d4d0/0x1281a10) succeed. 
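The rpc_cmd calls in this and the next few records reduce to the RPC sequence below; a sketch under the assumption that rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock, with every value copied from the log:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0    # in-capsule size 0, raised to the 256 B minimum per the rdma.c warning above
  $rpc bdev_malloc_create 512 512 -b Malloc1                                   # 512 MiB backing bdev with 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with the "nvme connect -i 15 ... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420" call logged further down and waits for a block device whose serial matches SPDKISFASTANDAWESOME.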
00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.478 Malloc1 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.478 [2024-06-11 13:37:01.317059] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:08.478 13:37:01 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:09:08.478 { 00:09:08.478 "name": "Malloc1", 00:09:08.478 "aliases": [ 00:09:08.478 "1953a38b-124b-4c30-8c4b-444433c8859b" 00:09:08.478 ], 00:09:08.478 "product_name": "Malloc disk", 00:09:08.478 "block_size": 512, 00:09:08.478 "num_blocks": 1048576, 00:09:08.478 "uuid": "1953a38b-124b-4c30-8c4b-444433c8859b", 00:09:08.478 "assigned_rate_limits": { 00:09:08.478 "rw_ios_per_sec": 0, 00:09:08.478 "rw_mbytes_per_sec": 0, 00:09:08.478 "r_mbytes_per_sec": 0, 00:09:08.478 "w_mbytes_per_sec": 0 00:09:08.478 }, 00:09:08.478 "claimed": true, 00:09:08.478 "claim_type": "exclusive_write", 00:09:08.478 "zoned": false, 00:09:08.478 "supported_io_types": { 00:09:08.478 "read": true, 00:09:08.478 "write": true, 00:09:08.478 "unmap": true, 00:09:08.478 "write_zeroes": true, 00:09:08.478 "flush": true, 00:09:08.478 "reset": true, 00:09:08.478 "compare": false, 00:09:08.478 "compare_and_write": false, 00:09:08.478 "abort": true, 00:09:08.478 "nvme_admin": false, 00:09:08.478 "nvme_io": false 00:09:08.478 }, 00:09:08.478 "memory_domains": [ 00:09:08.478 { 00:09:08.478 "dma_device_id": "system", 00:09:08.478 "dma_device_type": 1 00:09:08.478 }, 00:09:08.478 { 00:09:08.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.478 "dma_device_type": 2 00:09:08.478 } 00:09:08.478 ], 00:09:08.478 "driver_specific": {} 00:09:08.478 } 00:09:08.478 ]' 00:09:08.478 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:08.738 13:37:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:10.124 13:37:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.124 13:37:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:09:10.124 13:37:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.124 13:37:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:10.124 13:37:02 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:12.038 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:12.300 13:37:04 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:12.300 13:37:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.243 ************************************ 00:09:13.243 START TEST filesystem_ext4 00:09:13.243 ************************************ 00:09:13.243 13:37:06 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:13.243 mke2fs 1.46.5 (30-Dec-2021) 00:09:13.243 Discarding device blocks: 0/522240 done 00:09:13.243 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:13.243 Filesystem UUID: 9caf31d3-ba94-4f48-b4cf-e9693796701c 00:09:13.243 Superblock backups stored on blocks: 00:09:13.243 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:13.243 00:09:13.243 Allocating group tables: 0/64 done 00:09:13.243 Writing inode tables: 0/64 done 00:09:13.243 Creating journal (8192 blocks): done 00:09:13.243 Writing superblocks and filesystem accounting information: 0/64 done 00:09:13.243 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1934303 00:09:13.243 13:37:06 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.243 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.504 00:09:13.504 real 0m0.123s 00:09:13.504 user 0m0.031s 00:09:13.504 sys 0m0.039s 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:13.504 ************************************ 00:09:13.504 END TEST filesystem_ext4 00:09:13.504 ************************************ 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.504 ************************************ 00:09:13.504 START TEST filesystem_btrfs 00:09:13.504 ************************************ 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:09:13.504 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:13.504 btrfs-progs v6.6.2 00:09:13.504 See https://btrfs.readthedocs.io for more information. 
00:09:13.504 00:09:13.504 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:13.504 NOTE: several default settings have changed in version 5.15, please make sure 00:09:13.504 this does not affect your deployments: 00:09:13.504 - DUP for metadata (-m dup) 00:09:13.504 - enabled no-holes (-O no-holes) 00:09:13.504 - enabled free-space-tree (-R free-space-tree) 00:09:13.504 00:09:13.504 Label: (null) 00:09:13.504 UUID: 555360f5-f9cb-4031-b791-12c7ab2cc9fa 00:09:13.504 Node size: 16384 00:09:13.504 Sector size: 4096 00:09:13.504 Filesystem size: 510.00MiB 00:09:13.504 Block group profiles: 00:09:13.504 Data: single 8.00MiB 00:09:13.504 Metadata: DUP 32.00MiB 00:09:13.505 System: DUP 8.00MiB 00:09:13.505 SSD detected: yes 00:09:13.505 Zoned device: no 00:09:13.505 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:13.505 Runtime features: free-space-tree 00:09:13.505 Checksum: crc32c 00:09:13.505 Number of devices: 1 00:09:13.505 Devices: 00:09:13.505 ID SIZE PATH 00:09:13.505 1 510.00MiB /dev/nvme0n1p1 00:09:13.505 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1934303 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.505 00:09:13.505 real 0m0.134s 00:09:13.505 user 0m0.028s 00:09:13.505 sys 0m0.058s 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:13.505 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:13.505 ************************************ 00:09:13.505 END TEST filesystem_btrfs 00:09:13.505 ************************************ 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.765 ************************************ 00:09:13.765 START TEST filesystem_xfs 00:09:13.765 ************************************ 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:13.765 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:13.765 = sectsz=512 attr=2, projid32bit=1 00:09:13.765 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:13.765 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:13.765 data = bsize=4096 blocks=130560, imaxpct=25 00:09:13.765 = sunit=0 swidth=0 blks 00:09:13.765 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:13.765 log =internal log bsize=4096 blocks=16384, version=2 00:09:13.765 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:13.765 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:13.765 Discarding blocks...Done. 
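filesystem_ext4, filesystem_btrfs and filesystem_xfs all drive the same format/mount/teardown check against the exported namespace (the mount and umount for xfs follow in the next records); condensed as a sketch with nvmfpid standing in for the target pid (1934303 in this run), ext4 shown, and the partition created once beforehand as logged at target/filesystem.sh@68-69:

  # Partition created once earlier:
  #   parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
  # Each subtest then does:
  mkfs.ext4 -F /dev/nvme0n1p1            # mkfs.btrfs -f / mkfs.xfs -f for the other two subtests
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync          # small write, flushed through to the target
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                     # nvmf_tgt (pid 1934303 here) must have survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1  # device and partition still enumerated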
00:09:13.765 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1934303 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.766 00:09:13.766 real 0m0.134s 00:09:13.766 user 0m0.017s 00:09:13.766 sys 0m0.052s 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:13.766 ************************************ 00:09:13.766 END TEST filesystem_xfs 00:09:13.766 ************************************ 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:13.766 13:37:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.148 13:37:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.148 13:37:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:09:15.148 13:37:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:15.148 13:37:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.148 13:37:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:09:15.148 13:37:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1934303 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1934303 ']' 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1934303 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:15.148 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1934303 00:09:15.410 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:15.410 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:15.410 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1934303' 00:09:15.410 killing process with pid 1934303 00:09:15.410 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 1934303 00:09:15.410 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 1934303 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:15.713 00:09:15.713 real 0m8.214s 00:09:15.713 user 0m32.151s 00:09:15.713 sys 0m0.898s 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.713 ************************************ 00:09:15.713 END TEST nvmf_filesystem_no_in_capsule 00:09:15.713 ************************************ 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.713 ************************************ 00:09:15.713 START TEST nvmf_filesystem_in_capsule 00:09:15.713 ************************************ 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1936202 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1936202 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1936202 ']' 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:15.713 13:37:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.713 [2024-06-11 13:37:08.501784] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:15.713 [2024-06-11 13:37:08.501843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.713 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.713 [2024-06-11 13:37:08.567093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.994 [2024-06-11 13:37:08.641618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.994 [2024-06-11 13:37:08.641656] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.994 [2024-06-11 13:37:08.641668] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.994 [2024-06-11 13:37:08.641674] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:15.994 [2024-06-11 13:37:08.641680] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.994 [2024-06-11 13:37:08.641822] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.994 [2024-06-11 13:37:08.641942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.994 [2024-06-11 13:37:08.642085] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.994 [2024-06-11 13:37:08.642085] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.564 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.564 [2024-06-11 13:37:09.363827] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x113ae90/0x113f380) succeed. 00:09:16.564 [2024-06-11 13:37:09.378411] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x113c4d0/0x1180a10) succeed. 
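With both IB devices up, the test configures the target over RPC. Condensed, the sequence that follows (rpc_cmd is a thin wrapper around scripts/rpc.py; every argument below is copied from the log, only the explicit rpc.py form is an assumption) looks like this; note -c 4096, which sets the 4096-byte in-capsule data size this test variant exercises:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420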
00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.825 Malloc1 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.825 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 [2024-06-11 13:37:09.609932] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 
13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:09:16.826 { 00:09:16.826 "name": "Malloc1", 00:09:16.826 "aliases": [ 00:09:16.826 "935c61ed-3f66-4a59-b905-15d75a3a4c26" 00:09:16.826 ], 00:09:16.826 "product_name": "Malloc disk", 00:09:16.826 "block_size": 512, 00:09:16.826 "num_blocks": 1048576, 00:09:16.826 "uuid": "935c61ed-3f66-4a59-b905-15d75a3a4c26", 00:09:16.826 "assigned_rate_limits": { 00:09:16.826 "rw_ios_per_sec": 0, 00:09:16.826 "rw_mbytes_per_sec": 0, 00:09:16.826 "r_mbytes_per_sec": 0, 00:09:16.826 "w_mbytes_per_sec": 0 00:09:16.826 }, 00:09:16.826 "claimed": true, 00:09:16.826 "claim_type": "exclusive_write", 00:09:16.826 "zoned": false, 00:09:16.826 "supported_io_types": { 00:09:16.826 "read": true, 00:09:16.826 "write": true, 00:09:16.826 "unmap": true, 00:09:16.826 "write_zeroes": true, 00:09:16.826 "flush": true, 00:09:16.826 "reset": true, 00:09:16.826 "compare": false, 00:09:16.826 "compare_and_write": false, 00:09:16.826 "abort": true, 00:09:16.826 "nvme_admin": false, 00:09:16.826 "nvme_io": false 00:09:16.826 }, 00:09:16.826 "memory_domains": [ 00:09:16.826 { 00:09:16.826 "dma_device_id": "system", 00:09:16.826 "dma_device_type": 1 00:09:16.826 }, 00:09:16.826 { 00:09:16.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.826 "dma_device_type": 2 00:09:16.826 } 00:09:16.826 ], 00:09:16.826 "driver_specific": {} 00:09:16.826 } 00:09:16.826 ]' 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:16.826 13:37:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:18.741 13:37:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.741 13:37:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:09:18.741 13:37:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.741 13:37:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:18.741 13:37:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:20.655 13:37:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:21.596 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.597 ************************************ 00:09:21.597 START TEST filesystem_in_capsule_ext4 00:09:21.597 ************************************ 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 
-- target/filesystem.sh@18 -- # fstype=ext4 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:21.597 mke2fs 1.46.5 (30-Dec-2021) 00:09:21.597 Discarding device blocks: 0/522240 done 00:09:21.597 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:21.597 Filesystem UUID: 2bcff9af-9523-48ee-8a6f-f557f7bc2afe 00:09:21.597 Superblock backups stored on blocks: 00:09:21.597 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:21.597 00:09:21.597 Allocating group tables: 0/64 done 00:09:21.597 Writing inode tables: 0/64 done 00:09:21.597 Creating journal (8192 blocks): done 00:09:21.597 Writing superblocks and filesystem accounting information: 0/64 done 00:09:21.597 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1936202 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- 
# lsblk -l -o NAME 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:21.597 00:09:21.597 real 0m0.128s 00:09:21.597 user 0m0.018s 00:09:21.597 sys 0m0.048s 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:21.597 ************************************ 00:09:21.597 END TEST filesystem_in_capsule_ext4 00:09:21.597 ************************************ 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.597 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.858 ************************************ 00:09:21.858 START TEST filesystem_in_capsule_btrfs 00:09:21.858 ************************************ 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:21.858 btrfs-progs v6.6.2 00:09:21.858 See 
https://btrfs.readthedocs.io for more information. 00:09:21.858 00:09:21.858 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:21.858 NOTE: several default settings have changed in version 5.15, please make sure 00:09:21.858 this does not affect your deployments: 00:09:21.858 - DUP for metadata (-m dup) 00:09:21.858 - enabled no-holes (-O no-holes) 00:09:21.858 - enabled free-space-tree (-R free-space-tree) 00:09:21.858 00:09:21.858 Label: (null) 00:09:21.858 UUID: 2d0051d7-4b92-497c-a1fc-a1eb71406416 00:09:21.858 Node size: 16384 00:09:21.858 Sector size: 4096 00:09:21.858 Filesystem size: 510.00MiB 00:09:21.858 Block group profiles: 00:09:21.858 Data: single 8.00MiB 00:09:21.858 Metadata: DUP 32.00MiB 00:09:21.858 System: DUP 8.00MiB 00:09:21.858 SSD detected: yes 00:09:21.858 Zoned device: no 00:09:21.858 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:21.858 Runtime features: free-space-tree 00:09:21.858 Checksum: crc32c 00:09:21.858 Number of devices: 1 00:09:21.858 Devices: 00:09:21.858 ID SIZE PATH 00:09:21.858 1 510.00MiB /dev/nvme0n1p1 00:09:21.858 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1936202 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:21.858 00:09:21.858 real 0m0.137s 00:09:21.858 user 0m0.015s 00:09:21.858 sys 0m0.069s 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:21.858 ************************************ 00:09:21.858 END TEST 
filesystem_in_capsule_btrfs 00:09:21.858 ************************************ 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.858 ************************************ 00:09:21.858 START TEST filesystem_in_capsule_xfs 00:09:21.858 ************************************ 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:09:21.858 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:22.119 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:22.119 = sectsz=512 attr=2, projid32bit=1 00:09:22.119 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:22.119 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:22.119 data = bsize=4096 blocks=130560, imaxpct=25 00:09:22.119 = sunit=0 swidth=0 blks 00:09:22.119 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:22.119 log =internal log bsize=4096 blocks=16384, version=2 00:09:22.119 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:22.119 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:22.119 Discarding blocks...Done. 
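Each filesystem subtest (the ext4 and btrfs runs above and the xfs run that follows) boils down to the same mount-and-write check against the partition created earlier. A sketch with error handling omitted; device, mount point and commands mirror the log:

mkfs.xfs -f /dev/nvme0n1p1                # mkfs.ext4 -F / mkfs.btrfs -f in the other subtests
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                     # prove the mount is writable
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # the nvmf target must still be running
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible after umount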
00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.119 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1936202 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.120 00:09:22.120 real 0m0.144s 00:09:22.120 user 0m0.025s 00:09:22.120 sys 0m0.049s 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:22.120 ************************************ 00:09:22.120 END TEST filesystem_in_capsule_xfs 00:09:22.120 ************************************ 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:22.120 13:37:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.503 13:37:16 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1936202 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1936202 ']' 00:09:23.503 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1936202 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1936202 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1936202' 00:09:23.504 killing process with pid 1936202 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 1936202 00:09:23.504 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 1936202 00:09:23.763 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:23.763 00:09:23.763 real 0m8.219s 00:09:23.763 user 0m32.085s 00:09:23.763 sys 0m0.960s 00:09:23.763 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:23.763 13:37:16 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.763 ************************************ 00:09:23.763 END TEST nvmf_filesystem_in_capsule 00:09:23.763 ************************************ 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:24.024 
13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:24.024 rmmod nvme_rdma 00:09:24.024 rmmod nvme_fabrics 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:24.024 00:09:24.024 real 0m24.219s 00:09:24.024 user 1m6.605s 00:09:24.024 sys 0m7.392s 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:24.024 13:37:16 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.024 ************************************ 00:09:24.024 END TEST nvmf_filesystem 00:09:24.024 ************************************ 00:09:24.024 13:37:16 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:24.024 13:37:16 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:24.024 13:37:16 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:24.024 13:37:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:24.024 ************************************ 00:09:24.024 START TEST nvmf_target_discovery 00:09:24.024 ************************************ 00:09:24.024 13:37:16 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:24.024 * Looking for test storage... 
00:09:24.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.024 13:37:16 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.285 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:24.286 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:24.286 13:37:16 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.286 13:37:16 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:32.425 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:32.425 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.425 13:37:23 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:32.425 Found net devices under 0000:98:00.0: mlx_0_0 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:32.425 Found net devices under 0000:98:00.1: mlx_0_1 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:32.425 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.426 13:37:23 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:32.426 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:32.426 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:32.426 altname enp152s0f0np0 00:09:32.426 altname ens817f0np0 00:09:32.426 inet 192.168.100.8/24 scope global mlx_0_0 00:09:32.426 valid_lft forever preferred_lft forever 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:32.426 13:37:24 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:32.426 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:32.426 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:32.426 altname enp152s0f1np1 00:09:32.426 altname ens817f1np1 00:09:32.426 inet 192.168.100.9/24 scope global mlx_0_1 00:09:32.426 valid_lft forever preferred_lft forever 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:32.426 192.168.100.9' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:32.426 192.168.100.9' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:32.426 192.168.100.9' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1941819 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1941819 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 1941819 ']' 00:09:32.426 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
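Annotation: the address bookkeeping in the trace above reduces to a couple of shell one-liners per RDMA interface: read the IPv4 address out of 'ip -o -4 addr show', then take the first and second entries of the resulting list for NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that logic, assuming the mlx_0_0/mlx_0_1 interface names seen in this run (get_rdma_ip is an illustrative helper name, not the script's own):

    #!/usr/bin/env bash
    # Sketch only: mirrors the per-interface commands visible in the xtrace above.
    get_rdma_ip() {
        local ifname=$1
        # 'ip -o -4 addr show' prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
    }
    rdma_ips="$(get_rdma_ip mlx_0_0)
    $(get_rdma_ip mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ips" | head -n 1)                # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ips" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"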
00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 [2024-06-11 13:37:24.190661] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:32.427 [2024-06-11 13:37:24.190712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.427 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.427 [2024-06-11 13:37:24.251599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.427 [2024-06-11 13:37:24.316667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.427 [2024-06-11 13:37:24.316706] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.427 [2024-06-11 13:37:24.316714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.427 [2024-06-11 13:37:24.316720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.427 [2024-06-11 13:37:24.316726] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.427 [2024-06-11 13:37:24.316864] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.427 [2024-06-11 13:37:24.316981] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.427 [2024-06-11 13:37:24.317121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.427 [2024-06-11 13:37:24.317122] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:32.427 13:37:24 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 [2024-06-11 13:37:25.046788] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb0be90/0xb10380) succeed. 00:09:32.427 [2024-06-11 13:37:25.061153] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb0d4d0/0xb51a10) succeed. 
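Annotation: everything from here to the discovery check is plain SPDK JSON-RPC; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A standalone sketch of one pass of the seq 1 4 loop plus the discovery listener and referral, using only commands that appear verbatim in this log (the rpc path is the workspace path from this job):

    # Assumes nvmf_tgt is already running and listening on the default RPC socket.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Transport first, matching 'nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192'.
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # One iteration of the loop: null bdev, subsystem, namespace, RDMA listener on 4420.
    $rpc bdev_null_create Null1 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Discovery listener and a referral to a second discovery service on 4430.
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430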
00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 Null1 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 [2024-06-11 13:37:25.236838] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 Null2 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:32.427 13:37:25 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 Null3 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.427 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 Null4 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:09:32.689 00:09:32.689 Discovery Log Number of Records 6, Generation counter 6 00:09:32.689 =====Discovery Log Entry 0====== 00:09:32.689 trtype: rdma 00:09:32.689 adrfam: ipv4 00:09:32.689 subtype: current discovery subsystem 00:09:32.689 treq: not required 00:09:32.689 portid: 0 00:09:32.689 trsvcid: 4420 00:09:32.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:32.689 traddr: 192.168.100.8 00:09:32.689 eflags: explicit discovery connections, duplicate discovery information 00:09:32.689 rdma_prtype: not specified 00:09:32.689 rdma_qptype: connected 00:09:32.689 rdma_cms: rdma-cm 00:09:32.689 rdma_pkey: 0x0000 00:09:32.689 =====Discovery Log Entry 1====== 00:09:32.689 trtype: rdma 00:09:32.689 adrfam: ipv4 00:09:32.689 subtype: nvme subsystem 00:09:32.689 treq: not required 00:09:32.689 portid: 0 00:09:32.689 trsvcid: 4420 00:09:32.689 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:32.689 traddr: 192.168.100.8 00:09:32.689 eflags: none 00:09:32.689 rdma_prtype: not specified 00:09:32.689 rdma_qptype: connected 00:09:32.689 rdma_cms: rdma-cm 00:09:32.689 rdma_pkey: 0x0000 00:09:32.689 =====Discovery Log Entry 2====== 00:09:32.689 
trtype: rdma 00:09:32.689 adrfam: ipv4 00:09:32.689 subtype: nvme subsystem 00:09:32.689 treq: not required 00:09:32.689 portid: 0 00:09:32.689 trsvcid: 4420 00:09:32.689 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:32.689 traddr: 192.168.100.8 00:09:32.689 eflags: none 00:09:32.689 rdma_prtype: not specified 00:09:32.689 rdma_qptype: connected 00:09:32.689 rdma_cms: rdma-cm 00:09:32.689 rdma_pkey: 0x0000 00:09:32.689 =====Discovery Log Entry 3====== 00:09:32.689 trtype: rdma 00:09:32.689 adrfam: ipv4 00:09:32.689 subtype: nvme subsystem 00:09:32.689 treq: not required 00:09:32.689 portid: 0 00:09:32.689 trsvcid: 4420 00:09:32.689 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:32.689 traddr: 192.168.100.8 00:09:32.689 eflags: none 00:09:32.689 rdma_prtype: not specified 00:09:32.689 rdma_qptype: connected 00:09:32.689 rdma_cms: rdma-cm 00:09:32.689 rdma_pkey: 0x0000 00:09:32.689 =====Discovery Log Entry 4====== 00:09:32.689 trtype: rdma 00:09:32.689 adrfam: ipv4 00:09:32.689 subtype: nvme subsystem 00:09:32.689 treq: not required 00:09:32.689 portid: 0 00:09:32.689 trsvcid: 4420 00:09:32.689 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:32.689 traddr: 192.168.100.8 00:09:32.689 eflags: none 00:09:32.689 rdma_prtype: not specified 00:09:32.689 rdma_qptype: connected 00:09:32.689 rdma_cms: rdma-cm 00:09:32.689 rdma_pkey: 0x0000 00:09:32.689 =====Discovery Log Entry 5====== 00:09:32.689 trtype: rdma 00:09:32.689 adrfam: ipv4 00:09:32.689 subtype: discovery subsystem referral 00:09:32.689 treq: not required 00:09:32.689 portid: 0 00:09:32.689 trsvcid: 4430 00:09:32.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:32.689 traddr: 192.168.100.8 00:09:32.689 eflags: none 00:09:32.689 rdma_prtype: unrecognized 00:09:32.689 rdma_qptype: unrecognized 00:09:32.689 rdma_cms: unrecognized 00:09:32.689 rdma_pkey: 0x0000 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:32.689 Perform nvmf subsystem discovery via RPC 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.689 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.689 [ 00:09:32.689 { 00:09:32.689 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:32.689 "subtype": "Discovery", 00:09:32.689 "listen_addresses": [ 00:09:32.689 { 00:09:32.689 "trtype": "RDMA", 00:09:32.689 "adrfam": "IPv4", 00:09:32.689 "traddr": "192.168.100.8", 00:09:32.689 "trsvcid": "4420" 00:09:32.689 } 00:09:32.689 ], 00:09:32.689 "allow_any_host": true, 00:09:32.689 "hosts": [] 00:09:32.689 }, 00:09:32.689 { 00:09:32.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.689 "subtype": "NVMe", 00:09:32.689 "listen_addresses": [ 00:09:32.689 { 00:09:32.689 "trtype": "RDMA", 00:09:32.689 "adrfam": "IPv4", 00:09:32.689 "traddr": "192.168.100.8", 00:09:32.689 "trsvcid": "4420" 00:09:32.689 } 00:09:32.689 ], 00:09:32.690 "allow_any_host": true, 00:09:32.690 "hosts": [], 00:09:32.690 "serial_number": "SPDK00000000000001", 00:09:32.690 "model_number": "SPDK bdev Controller", 00:09:32.690 "max_namespaces": 32, 00:09:32.690 "min_cntlid": 1, 00:09:32.690 "max_cntlid": 65519, 00:09:32.690 "namespaces": [ 00:09:32.690 { 00:09:32.690 "nsid": 1, 00:09:32.690 "bdev_name": "Null1", 00:09:32.690 "name": "Null1", 00:09:32.690 "nguid": "5517C6D011B04FD8879848ACA8307A8A", 00:09:32.690 "uuid": 
"5517c6d0-11b0-4fd8-8798-48aca8307a8a" 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 }, 00:09:32.690 { 00:09:32.690 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:32.690 "subtype": "NVMe", 00:09:32.690 "listen_addresses": [ 00:09:32.690 { 00:09:32.690 "trtype": "RDMA", 00:09:32.690 "adrfam": "IPv4", 00:09:32.690 "traddr": "192.168.100.8", 00:09:32.690 "trsvcid": "4420" 00:09:32.690 } 00:09:32.690 ], 00:09:32.690 "allow_any_host": true, 00:09:32.690 "hosts": [], 00:09:32.690 "serial_number": "SPDK00000000000002", 00:09:32.690 "model_number": "SPDK bdev Controller", 00:09:32.690 "max_namespaces": 32, 00:09:32.690 "min_cntlid": 1, 00:09:32.690 "max_cntlid": 65519, 00:09:32.690 "namespaces": [ 00:09:32.690 { 00:09:32.690 "nsid": 1, 00:09:32.690 "bdev_name": "Null2", 00:09:32.690 "name": "Null2", 00:09:32.690 "nguid": "35E432F62C2146B3809E6660DCE3E7D6", 00:09:32.690 "uuid": "35e432f6-2c21-46b3-809e-6660dce3e7d6" 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 }, 00:09:32.690 { 00:09:32.690 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:32.690 "subtype": "NVMe", 00:09:32.690 "listen_addresses": [ 00:09:32.690 { 00:09:32.690 "trtype": "RDMA", 00:09:32.690 "adrfam": "IPv4", 00:09:32.690 "traddr": "192.168.100.8", 00:09:32.690 "trsvcid": "4420" 00:09:32.690 } 00:09:32.690 ], 00:09:32.690 "allow_any_host": true, 00:09:32.690 "hosts": [], 00:09:32.690 "serial_number": "SPDK00000000000003", 00:09:32.690 "model_number": "SPDK bdev Controller", 00:09:32.690 "max_namespaces": 32, 00:09:32.690 "min_cntlid": 1, 00:09:32.690 "max_cntlid": 65519, 00:09:32.690 "namespaces": [ 00:09:32.690 { 00:09:32.690 "nsid": 1, 00:09:32.690 "bdev_name": "Null3", 00:09:32.690 "name": "Null3", 00:09:32.690 "nguid": "278C1B6E07C14C88B7EAE67B7B871EA8", 00:09:32.690 "uuid": "278c1b6e-07c1-4c88-b7ea-e67b7b871ea8" 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 }, 00:09:32.690 { 00:09:32.690 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:32.690 "subtype": "NVMe", 00:09:32.690 "listen_addresses": [ 00:09:32.690 { 00:09:32.690 "trtype": "RDMA", 00:09:32.690 "adrfam": "IPv4", 00:09:32.690 "traddr": "192.168.100.8", 00:09:32.690 "trsvcid": "4420" 00:09:32.690 } 00:09:32.690 ], 00:09:32.690 "allow_any_host": true, 00:09:32.690 "hosts": [], 00:09:32.690 "serial_number": "SPDK00000000000004", 00:09:32.690 "model_number": "SPDK bdev Controller", 00:09:32.690 "max_namespaces": 32, 00:09:32.690 "min_cntlid": 1, 00:09:32.690 "max_cntlid": 65519, 00:09:32.690 "namespaces": [ 00:09:32.690 { 00:09:32.690 "nsid": 1, 00:09:32.690 "bdev_name": "Null4", 00:09:32.690 "name": "Null4", 00:09:32.690 "nguid": "37ECE9FD1C73498D80DC5439EC1F44F4", 00:09:32.690 "uuid": "37ece9fd-1c73-498d-80dc-5439ec1f44f4" 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.690 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:32.952 rmmod nvme_rdma 00:09:32.952 rmmod nvme_fabrics 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1941819 ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1941819 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 1941819 ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 1941819 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1941819 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1941819' 00:09:32.952 killing process with pid 1941819 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 1941819 00:09:32.952 13:37:25 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@973 -- # wait 1941819 00:09:33.214 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.214 13:37:25 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:33.214 00:09:33.214 real 0m9.135s 00:09:33.215 user 0m8.797s 00:09:33.215 sys 0m5.617s 00:09:33.215 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:33.215 13:37:25 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:33.215 ************************************ 00:09:33.215 END TEST nvmf_target_discovery 00:09:33.215 ************************************ 00:09:33.215 13:37:26 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:33.215 13:37:26 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:33.215 13:37:26 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:33.215 13:37:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:33.215 ************************************ 00:09:33.215 START TEST nvmf_referrals 00:09:33.215 ************************************ 00:09:33.215 13:37:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:33.476 * Looking for test storage... 00:09:33.476 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.476 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.477 13:37:26 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.066 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:40.067 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:40.067 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.067 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:40.328 
13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:40.328 Found net devices under 0000:98:00.0: mlx_0_0 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:40.328 Found net devices under 0000:98:00.1: mlx_0_1 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:40.328 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:40.329 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:40.329 13:37:32 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:40.329 13:37:33 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:40.329 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.329 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:40.329 altname enp152s0f0np0 00:09:40.329 altname ens817f0np0 00:09:40.329 inet 192.168.100.8/24 scope global mlx_0_0 00:09:40.329 valid_lft forever preferred_lft forever 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:40.329 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.329 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:40.329 altname enp152s0f1np1 00:09:40.329 altname ens817f1np1 00:09:40.329 inet 192.168.100.9/24 scope global mlx_0_1 00:09:40.329 valid_lft forever preferred_lft forever 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals 
-- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:40.329 192.168.100.9' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:40.329 192.168.100.9' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:40.329 192.168.100.9' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1945906 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1945906 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 1945906 ']' 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:40.329 13:37:33 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.590 [2024-06-11 13:37:33.261866] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:09:40.590 [2024-06-11 13:37:33.261915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.590 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.590 [2024-06-11 13:37:33.322501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.590 [2024-06-11 13:37:33.386816] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.590 [2024-06-11 13:37:33.386855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.590 [2024-06-11 13:37:33.386862] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.590 [2024-06-11 13:37:33.386874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.590 [2024-06-11 13:37:33.386879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.590 [2024-06-11 13:37:33.387041] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.590 [2024-06-11 13:37:33.387129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.590 [2024-06-11 13:37:33.387432] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.590 [2024-06-11 13:37:33.387433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.163 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 [2024-06-11 13:37:34.108432] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10f2e90/0x10f7380) succeed. 00:09:41.425 [2024-06-11 13:37:34.122941] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10f44d0/0x1138a10) succeed. 
00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 [2024-06-11 13:37:34.249889] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.425 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.686 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:41.687 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.687 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.946 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:41.947 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:42.207 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:42.207 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:42.207 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:42.207 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:42.207 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.207 13:37:34 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:42.207 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:42.208 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:42.208 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.208 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.208 
13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.208 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.208 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.468 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.469 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:42.469 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:42.469 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:42.469 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.469 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:42.729 rmmod nvme_rdma 00:09:42.729 rmmod nvme_fabrics 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:42.729 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1945906 ']' 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1945906 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 1945906 ']' 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 1945906 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1945906 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1945906' 00:09:42.730 killing process with pid 1945906 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 1945906 00:09:42.730 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 1945906 00:09:42.989 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.990 13:37:35 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
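Stripped of the xtrace noise, the referral phase just completed is a short add / verify / remove cycle against the discovery subsystem listening on 192.168.100.8:8009. A sketch of the same sequence with scripts/rpc.py and nvme-cli (the --hostnqn/--hostid flags seen in the log are omitted here for brevity):

  # add three referrals, as referrals.sh does at lines 44-46
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done

  # target-side view of the referrals
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # host-side view via the discovery log page (same jq filter as get_referral_ips)
  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # remove them again and confirm the list is empty
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 0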
00:09:42.990 00:09:42.990 real 0m9.762s 00:09:42.990 user 0m12.434s 00:09:42.990 sys 0m5.769s 00:09:42.990 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:42.990 13:37:35 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.990 ************************************ 00:09:42.990 END TEST nvmf_referrals 00:09:42.990 ************************************ 00:09:42.990 13:37:35 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:42.990 13:37:35 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:42.990 13:37:35 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:42.990 13:37:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:42.990 ************************************ 00:09:42.990 START TEST nvmf_connect_disconnect 00:09:42.990 ************************************ 00:09:42.990 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:43.251 * Looking for test storage... 00:09:43.251 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.251 13:37:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.251 13:37:36 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- 
# e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:51.388 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:51.388 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:51.389 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:51.389 Found net devices under 0000:98:00.0: mlx_0_0 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:51.389 Found net devices under 0000:98:00.1: mlx_0_1 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:51.389 13:37:42 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:51.389 13:37:42 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:51.389 10: 
mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.389 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:51.389 altname enp152s0f0np0 00:09:51.389 altname ens817f0np0 00:09:51.389 inet 192.168.100.8/24 scope global mlx_0_0 00:09:51.389 valid_lft forever preferred_lft forever 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:51.389 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.389 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:51.389 altname enp152s0f1np1 00:09:51.389 altname ens817f1np1 00:09:51.389 inet 192.168.100.9/24 scope global mlx_0_1 00:09:51.389 valid_lft forever preferred_lft forever 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.389 
13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.389 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:51.390 192.168.100.9' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:51.390 192.168.100.9' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:51.390 192.168.100.9' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1950381 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1950381 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 1950381 ']' 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 [2024-06-11 13:37:43.186111] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:51.390 [2024-06-11 13:37:43.186179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.390 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.390 [2024-06-11 13:37:43.252964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.390 [2024-06-11 13:37:43.328316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.390 [2024-06-11 13:37:43.328354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.390 [2024-06-11 13:37:43.328361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.390 [2024-06-11 13:37:43.328368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.390 [2024-06-11 13:37:43.328373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:51.390 [2024-06-11 13:37:43.328513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.390 [2024-06-11 13:37:43.328635] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.390 [2024-06-11 13:37:43.328792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.390 [2024-06-11 13:37:43.328793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:51.390 13:37:43 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 [2024-06-11 13:37:44.014626] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:51.390 [2024-06-11 13:37:44.045903] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20c3e90/0x20c8380) succeed. 00:09:51.390 [2024-06-11 13:37:44.060236] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20c54d0/0x2109a10) succeed. 
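The two "Create IB device mlx5_*(...) succeed." notices above are produced by the nvmf_create_transport call: rpc_cmd in the trace is a thin wrapper around SPDK's rpc.py, so the same step run by hand would look roughly like this (options copied from target/connect_disconnect.sh@18 in the trace; the nvmf_get_transports call is only a sanity check and is not part of the test script):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # create the RDMA transport; this is the step that probes the mlx5 devices
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0

    # optional: confirm the transport was registered
    $SPDK/scripts/rpc.py nvmf_get_transports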
00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:51.390 [2024-06-11 13:37:44.218010] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:51.390 13:37:44 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:55.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
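Every "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line in this stretch is one pass of the connect/disconnect loop: with num_iterations=100 and NVME_CONNECT='nvme connect -i 8', the host attaches to the subsystem listening on 192.168.100.8:4420 over RDMA and immediately tears the controller down again, which is what nvme-cli prints on disconnect. A rough sketch of the loop with standard nvme-cli options (the real loop body in target/connect_disconnect.sh does extra bookkeeping between the two calls, so treat this as an approximation rather than the script itself):

    for i in $(seq 1 100); do
        # NVME_CONNECT in the trace expands to 'nvme connect -i 8'
        nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        # disconnect prints the 'NQN:... disconnected 1 controller(s)' lines seen here
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done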
00:10:33.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.521 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:23.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' 
rdma == rdma ']' 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:41.749 rmmod nvme_rdma 00:15:41.749 rmmod nvme_fabrics 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1950381 ']' 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1950381 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1950381 ']' 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 1950381 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1950381 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1950381' 00:15:41.749 killing process with pid 1950381 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 1950381 00:15:41.749 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 1950381 00:15:42.011 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.011 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:42.011 00:15:42.011 real 5m58.818s 00:15:42.011 user 23m21.121s 00:15:42.011 sys 0m14.904s 00:15:42.011 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:42.011 13:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:42.011 ************************************ 00:15:42.011 END TEST nvmf_connect_disconnect 00:15:42.011 ************************************ 00:15:42.011 13:43:34 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:15:42.011 13:43:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:42.011 13:43:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:42.011 13:43:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:42.011 ************************************ 00:15:42.011 START TEST nvmf_multitarget 00:15:42.011 ************************************ 00:15:42.011 13:43:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:15:42.011 * Looking for test storage... 
00:15:42.011 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:42.012 13:43:34 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.150 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:50.151 13:43:41 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:50.151 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:50.151 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:50.151 Found net devices under 0000:98:00.0: mlx_0_0 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:50.151 Found net devices under 0000:98:00.1: mlx_0_1 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:50.151 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:50.151 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:50.151 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:50.151 altname enp152s0f0np0 00:15:50.151 altname ens817f0np0 00:15:50.151 inet 192.168.100.8/24 scope global mlx_0_0 00:15:50.151 valid_lft forever preferred_lft forever 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:50.152 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:50.152 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:50.152 altname enp152s0f1np1 00:15:50.152 altname ens817f1np1 00:15:50.152 inet 192.168.100.9/24 scope global mlx_0_1 00:15:50.152 valid_lft forever preferred_lft forever 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget 
-- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:50.152 192.168.100.9' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:50.152 192.168.100.9' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:50.152 192.168.100.9' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # head -n 1 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2025281 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2025281 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 2025281 ']' 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:50.152 13:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:50.152 [2024-06-11 13:43:42.011925] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:15:50.152 [2024-06-11 13:43:42.011974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.152 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.153 [2024-06-11 13:43:42.072516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.153 [2024-06-11 13:43:42.138053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.153 [2024-06-11 13:43:42.138090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.153 [2024-06-11 13:43:42.138097] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.153 [2024-06-11 13:43:42.138104] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.153 [2024-06-11 13:43:42.138110] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
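The stretch above shows how nvmf/common.sh derives the test addresses from the two Mellanox ports: for each interface returned by get_rdma_if_list it runs the ip/awk/cut pipeline, joins the results into RDMA_IP_LIST, and then splits that list into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head and tail. Condensed into stand-alone shell, using the helper name, interface names, and addresses from the trace:

    get_ip_address() {
        # same pipeline as nvmf/common.sh@113 above
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                                             # two lines: 192.168.100.8, 192.168.100.9

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9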
00:15:50.153 [2024-06-11 13:43:42.142032] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.153 [2024-06-11 13:43:42.142075] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.153 [2024-06-11 13:43:42.142237] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.153 [2024-06-11 13:43:42.142237] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:50.153 13:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:50.153 "nvmf_tgt_1" 00:15:50.153 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:50.413 "nvmf_tgt_2" 00:15:50.413 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:50.413 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:50.413 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:50.413 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:50.413 true 00:15:50.413 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:50.675 true 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.675 
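The multitarget pass above is driven entirely by test/nvmf/target/multitarget_rpc.py: count the default target, create nvmf_tgt_1 and nvmf_tgt_2 with the options recorded in the trace, check that three targets now exist, delete both (each delete returns "true"), and check that only the default target remains. Restated as the hand-run checks the trace implies (the script itself uses '[' ... '!=' ... ']' comparisons rather than the asserts shown here):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists

    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets

    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default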
13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:50.675 rmmod nvme_rdma 00:15:50.675 rmmod nvme_fabrics 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2025281 ']' 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2025281 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2025281 ']' 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2025281 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:50.675 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2025281 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2025281' 00:15:50.935 killing process with pid 2025281 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2025281 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2025281 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:50.935 00:15:50.935 real 0m8.968s 00:15:50.935 user 0m9.177s 00:15:50.935 sys 0m5.646s 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:50.935 13:43:43 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 ************************************ 00:15:50.935 END TEST nvmf_multitarget 00:15:50.935 ************************************ 00:15:50.935 13:43:43 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:15:50.935 13:43:43 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:50.935 13:43:43 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:50.935 13:43:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 ************************************ 00:15:50.935 START TEST nvmf_rpc 00:15:50.935 ************************************ 00:15:50.935 13:43:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:15:51.196 * Looking for test 
storage... 00:15:51.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.196 13:43:43 
nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.196 13:43:43 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.332 13:43:50 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:59.332 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
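The trace above is nvmf/common.sh building its NIC tables from PCI vendor:device IDs (Intel E810/X722 and Mellanox mlx5 parts) and, because SPDK_TEST_NVMF_NICS=mlx5, keeping only the Mellanox entries; both ports of the ConnectX adapter in this rig report as 0x15b3 - 0x1015. A rough way to reproduce that device listing outside the harness (a sketch only, not the common.sh pci_bus_cache code path):

# Sketch: list Mellanox (vendor 0x15b3) PCI functions in the same
# "Found <addr> (0x15b3 - 0x<dev>)" form the trace prints above.
for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
    dev_id=$(lspci -s "$pci" -n | awk '{print $3}' | cut -d: -f2)
    echo "Found $pci (0x15b3 - 0x$dev_id)"
done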
00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:59.332 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:59.332 Found net devices under 0000:98:00.0: mlx_0_0 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:59.332 Found net devices under 0000:98:00.1: mlx_0_1 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # 
modprobe ib_cm 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:59.332 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:59.333 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:59.333 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:59.333 altname enp152s0f0np0 00:15:59.333 altname ens817f0np0 00:15:59.333 inet 192.168.100.8/24 scope global mlx_0_0 00:15:59.333 valid_lft forever preferred_lft 
forever 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:59.333 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:59.333 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:59.333 altname enp152s0f1np1 00:15:59.333 altname ens817f1np1 00:15:59.333 inet 192.168.100.9/24 scope global mlx_0_1 00:15:59.333 valid_lft forever preferred_lft forever 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:59.333 192.168.100.9' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:59.333 192.168.100.9' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:59.333 192.168.100.9' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:59.333 13:43:50 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2029370 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2029370 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2029370 ']' 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
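At this point the harness has mapped both RDMA-capable ports to their addresses (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9), set NVMF_TRANSPORT_OPTS to '-t rdma --num-shared-buffers 1024', loaded nvme-rdma, and launched nvmf_tgt with core mask 0xF. The address lookup traced above reduces to the ip/awk/cut pipeline; paraphrased below (not the literal common.sh functions):

# Paraphrase of the get_ip_address pipeline traced above.
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)    # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'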
00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:59.333 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.333 [2024-06-11 13:43:51.060854] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:15:59.333 [2024-06-11 13:43:51.060924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.333 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.333 [2024-06-11 13:43:51.129748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.333 [2024-06-11 13:43:51.196422] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.333 [2024-06-11 13:43:51.196460] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.333 [2024-06-11 13:43:51.196468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.333 [2024-06-11 13:43:51.196474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.333 [2024-06-11 13:43:51.196480] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.333 [2024-06-11 13:43:51.196615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.333 [2024-06-11 13:43:51.196741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.333 [2024-06-11 13:43:51.196897] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.333 [2024-06-11 13:43:51.196898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:59.334 "tick_rate": 2400000000, 00:15:59.334 "poll_groups": [ 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_000", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [] 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_001", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [] 
00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_002", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [] 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_003", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [] 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 }' 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.334 [2024-06-11 13:43:51.490812] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e57ed0/0x1e5c3c0) succeed. 00:15:59.334 [2024-06-11 13:43:51.505449] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e59510/0x1e9da50) succeed. 
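With the target up, rpc.sh first dumps nvmf_get_stats (four empty poll groups, one per core of the 0xF mask) and then creates the RDMA transport, which registers the two IB devices. rpc_cmd is the harness wrapper around scripts/rpc.py on /var/tmp/spdk.sock, so outside the harness the same two calls would look roughly like this (flags copied from the trace; treat the standalone form as an approximation):

# Approximate standalone equivalents of the rpc_cmd calls traced above.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'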
00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.334 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:59.334 "tick_rate": 2400000000, 00:15:59.334 "poll_groups": [ 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_000", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [ 00:15:59.334 { 00:15:59.334 "trtype": "RDMA", 00:15:59.334 "pending_data_buffer": 0, 00:15:59.334 "devices": [ 00:15:59.334 { 00:15:59.334 "name": "mlx5_0", 00:15:59.334 "polls": 15502, 00:15:59.334 "idle_polls": 15502, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 1 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "mlx5_1", 00:15:59.334 "polls": 15502, 00:15:59.334 "idle_polls": 15502, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 1 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_001", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [ 00:15:59.334 { 00:15:59.334 "trtype": "RDMA", 00:15:59.334 "pending_data_buffer": 0, 00:15:59.334 "devices": [ 00:15:59.334 { 00:15:59.334 "name": "mlx5_0", 00:15:59.334 "polls": 15735, 00:15:59.334 "idle_polls": 15735, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 1 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "mlx5_1", 00:15:59.334 "polls": 15735, 00:15:59.334 "idle_polls": 15735, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 
1 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_002", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [ 00:15:59.334 { 00:15:59.334 "trtype": "RDMA", 00:15:59.334 "pending_data_buffer": 0, 00:15:59.334 "devices": [ 00:15:59.334 { 00:15:59.334 "name": "mlx5_0", 00:15:59.334 "polls": 5416, 00:15:59.334 "idle_polls": 5416, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 1 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "mlx5_1", 00:15:59.334 "polls": 5416, 00:15:59.334 "idle_polls": 5416, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 1 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 } 00:15:59.334 ] 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "nvmf_tgt_poll_group_003", 00:15:59.334 "admin_qpairs": 0, 00:15:59.334 "io_qpairs": 0, 00:15:59.334 "current_admin_qpairs": 0, 00:15:59.334 "current_io_qpairs": 0, 00:15:59.334 "pending_bdev_io": 0, 00:15:59.334 "completed_nvme_io": 0, 00:15:59.334 "transports": [ 00:15:59.334 { 00:15:59.334 "trtype": "RDMA", 00:15:59.334 "pending_data_buffer": 0, 00:15:59.334 "devices": [ 00:15:59.334 { 00:15:59.334 "name": "mlx5_0", 00:15:59.334 "polls": 837, 00:15:59.334 "idle_polls": 837, 00:15:59.334 "completions": 0, 00:15:59.334 "requests": 0, 00:15:59.334 "request_latency": 0, 00:15:59.334 "pending_free_request": 0, 00:15:59.334 "pending_rdma_read": 0, 00:15:59.334 "pending_rdma_write": 0, 00:15:59.334 "pending_rdma_send": 0, 00:15:59.334 "total_send_wrs": 0, 00:15:59.334 "send_doorbell_updates": 0, 00:15:59.334 "total_recv_wrs": 4096, 00:15:59.334 "recv_doorbell_updates": 1 00:15:59.334 }, 00:15:59.334 { 00:15:59.334 "name": "mlx5_1", 00:15:59.334 "polls": 837, 00:15:59.334 "idle_polls": 837, 00:15:59.335 "completions": 0, 00:15:59.335 "requests": 0, 00:15:59.335 "request_latency": 0, 00:15:59.335 "pending_free_request": 0, 00:15:59.335 "pending_rdma_read": 0, 00:15:59.335 "pending_rdma_write": 0, 00:15:59.335 "pending_rdma_send": 0, 00:15:59.335 "total_send_wrs": 0, 00:15:59.335 "send_doorbell_updates": 0, 00:15:59.335 "total_recv_wrs": 4096, 00:15:59.335 "recv_doorbell_updates": 1 00:15:59.335 } 00:15:59.335 ] 00:15:59.335 } 00:15:59.335 ] 00:15:59.335 } 00:15:59.335 ] 00:15:59.335 }' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:59.335 
13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 Malloc1 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.335 
13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 [2024-06-11 13:43:51.954275] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:15:59.335 13:43:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:15:59.335 [2024-06-11 13:43:52.009899] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:15:59.335 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:59.335 could not add new controller: failed to write to nvme-fabrics device 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:59.335 13:43:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:00.718 13:43:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:00.718 13:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:00.718 13:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.718 13:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:00.718 13:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:02.629 13:43:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.012 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:04.013 [2024-06-11 13:43:56.863890] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:16:04.013 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:04.013 could not add new controller: failed to write to nvme-fabrics device 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:04.013 13:43:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:05.398 13:43:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:05.398 13:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:05.398 13:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.398 13:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:05.398 13:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:07.942 13:44:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:08.880 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.881 [2024-06-11 13:44:01.645061] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- 
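The preceding block (target/rpc.sh @52-@78) is the host access-control check: with allow_any_host disabled, the NOT helper asserts that nvme connect fails ("Subsystem ... does not allow host ..."); the connect succeeds after nvmf_subsystem_add_host, fails again after nvmf_subsystem_remove_host, succeeds once allow_any_host is re-enabled, and the subsystem is then deleted before the script enters its 5-iteration create/connect/teardown loop. Stripped of the wrapper functions, each negative check amounts to something like the following (a sketch using the hostnqn/address values printed in this run):

# Sketch of the expected-failure check the NOT helper performs above.
if nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
       --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396; then
    echo "ERROR: connect succeeded although the host is not allowed" >&2
    exit 1
fi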
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:08.881 13:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:10.264 13:44:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:10.264 13:44:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:10.264 13:44:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.264 13:44:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:10.264 13:44:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:12.176 13:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.559 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
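From target/rpc.sh @81 onward the same sequence repeats loops=5 times: create the subsystem, add the RDMA listener on 192.168.100.8:4420, attach Malloc1 as namespace 5, allow any host, connect, wait until the SPDKISFASTANDAWESOME serial shows up in lsblk, disconnect, then remove the namespace and delete the subsystem. Condensed outline of the traced commands (rpc_cmd, waitforserial and waitforserial_disconnect are harness helpers, so this only runs inside the test environment):

# Condensed outline of one loop iteration as traced above.
for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done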
00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.819 [2024-06-11 13:44:06.500920] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:13.819 13:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:15.201 13:44:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:15.201 13:44:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:15.201 13:44:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.201 13:44:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:15.201 13:44:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:17.129 13:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:18.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 [2024-06-11 13:44:11.312073] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:18.558 13:44:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:19.940 13:44:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:19.940 13:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:19.940 13:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.940 13:44:12 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:19.940 13:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:21.853 13:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:21.853 13:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:21.853 13:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.114 13:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:22.114 13:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.114 13:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:22.114 13:44:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:23.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.056 13:44:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:23.056 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:23.056 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:23.056 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.315 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:23.315 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:23.315 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:23.315 13:44:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:23.315 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.315 13:44:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.315 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.315 13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.315 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 [2024-06-11 13:44:16.038100] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.316 
13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.316 13:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:24.697 13:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:24.697 13:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:24.697 13:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.697 13:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:24.697 13:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:26.608 13:44:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:26.609 13:44:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:26.609 13:44:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:26.868 13:44:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:26.868 13:44:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.868 13:44:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:26.868 13:44:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.807 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.807 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:27.807 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:27.807 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:27.808 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.068 [2024-06-11 13:44:20.747434] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:28.068 13:44:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:29.449 13:44:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:29.449 13:44:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:16:29.449 13:44:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.449 13:44:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:29.449 13:44:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:16:31.358 13:44:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:31.358 13:44:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:31.358 13:44:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.358 13:44:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:31.358 13:44:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.358 13:44:24 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:16:31.358 13:44:24 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.741 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.741 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:16:32.741 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:32.741 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.741 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 [2024-06-11 13:44:25.618165] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:32.742 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.003 [2024-06-11 13:44:25.678287] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.003 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 [2024-06-11 13:44:25.738551] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 [2024-06-11 13:44:25.794727] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 [2024-06-11 13:44:25.854942] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:33.004 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.265 13:44:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:33.265 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:33.265 "tick_rate": 2400000000, 00:16:33.265 "poll_groups": [ 00:16:33.265 { 00:16:33.265 "name": "nvmf_tgt_poll_group_000", 00:16:33.265 "admin_qpairs": 2, 00:16:33.265 "io_qpairs": 27, 00:16:33.265 "current_admin_qpairs": 0, 00:16:33.265 "current_io_qpairs": 0, 00:16:33.265 "pending_bdev_io": 0, 00:16:33.265 "completed_nvme_io": 78, 00:16:33.265 "transports": [ 00:16:33.265 { 00:16:33.265 "trtype": "RDMA", 00:16:33.265 "pending_data_buffer": 0, 00:16:33.265 "devices": [ 00:16:33.265 { 00:16:33.265 "name": "mlx5_0", 00:16:33.265 "polls": 4812075, 00:16:33.265 "idle_polls": 4811834, 00:16:33.265 "completions": 263, 00:16:33.265 "requests": 131, 00:16:33.265 "request_latency": 18484786, 00:16:33.265 "pending_free_request": 0, 00:16:33.265 "pending_rdma_read": 0, 00:16:33.265 "pending_rdma_write": 0, 00:16:33.265 "pending_rdma_send": 0, 00:16:33.265 "total_send_wrs": 207, 00:16:33.265 "send_doorbell_updates": 119, 00:16:33.265 "total_recv_wrs": 4227, 00:16:33.265 "recv_doorbell_updates": 119 00:16:33.265 }, 00:16:33.265 { 00:16:33.265 "name": "mlx5_1", 00:16:33.265 "polls": 4812075, 00:16:33.265 "idle_polls": 4812075, 00:16:33.265 "completions": 0, 00:16:33.265 "requests": 0, 00:16:33.265 "request_latency": 0, 00:16:33.265 "pending_free_request": 0, 00:16:33.265 "pending_rdma_read": 0, 00:16:33.265 "pending_rdma_write": 0, 00:16:33.265 "pending_rdma_send": 0, 00:16:33.265 "total_send_wrs": 0, 00:16:33.265 "send_doorbell_updates": 0, 00:16:33.265 "total_recv_wrs": 4096, 00:16:33.265 "recv_doorbell_updates": 1 00:16:33.265 } 00:16:33.265 ] 00:16:33.265 } 00:16:33.265 ] 00:16:33.265 }, 00:16:33.265 { 00:16:33.265 "name": "nvmf_tgt_poll_group_001", 00:16:33.265 "admin_qpairs": 2, 00:16:33.265 "io_qpairs": 26, 00:16:33.265 "current_admin_qpairs": 0, 00:16:33.265 "current_io_qpairs": 0, 00:16:33.265 "pending_bdev_io": 0, 00:16:33.265 "completed_nvme_io": 126, 00:16:33.265 "transports": [ 00:16:33.265 { 00:16:33.265 "trtype": "RDMA", 00:16:33.265 
"pending_data_buffer": 0, 00:16:33.265 "devices": [ 00:16:33.265 { 00:16:33.265 "name": "mlx5_0", 00:16:33.265 "polls": 4921005, 00:16:33.265 "idle_polls": 4920688, 00:16:33.265 "completions": 358, 00:16:33.265 "requests": 179, 00:16:33.265 "request_latency": 29986486, 00:16:33.265 "pending_free_request": 0, 00:16:33.265 "pending_rdma_read": 0, 00:16:33.265 "pending_rdma_write": 0, 00:16:33.265 "pending_rdma_send": 0, 00:16:33.265 "total_send_wrs": 304, 00:16:33.265 "send_doorbell_updates": 154, 00:16:33.265 "total_recv_wrs": 4275, 00:16:33.265 "recv_doorbell_updates": 155 00:16:33.265 }, 00:16:33.265 { 00:16:33.265 "name": "mlx5_1", 00:16:33.265 "polls": 4921005, 00:16:33.265 "idle_polls": 4921005, 00:16:33.265 "completions": 0, 00:16:33.265 "requests": 0, 00:16:33.265 "request_latency": 0, 00:16:33.265 "pending_free_request": 0, 00:16:33.265 "pending_rdma_read": 0, 00:16:33.265 "pending_rdma_write": 0, 00:16:33.265 "pending_rdma_send": 0, 00:16:33.265 "total_send_wrs": 0, 00:16:33.265 "send_doorbell_updates": 0, 00:16:33.265 "total_recv_wrs": 4096, 00:16:33.265 "recv_doorbell_updates": 1 00:16:33.265 } 00:16:33.265 ] 00:16:33.265 } 00:16:33.265 ] 00:16:33.265 }, 00:16:33.265 { 00:16:33.265 "name": "nvmf_tgt_poll_group_002", 00:16:33.265 "admin_qpairs": 1, 00:16:33.265 "io_qpairs": 26, 00:16:33.265 "current_admin_qpairs": 0, 00:16:33.265 "current_io_qpairs": 0, 00:16:33.265 "pending_bdev_io": 0, 00:16:33.265 "completed_nvme_io": 126, 00:16:33.265 "transports": [ 00:16:33.265 { 00:16:33.265 "trtype": "RDMA", 00:16:33.265 "pending_data_buffer": 0, 00:16:33.265 "devices": [ 00:16:33.265 { 00:16:33.265 "name": "mlx5_0", 00:16:33.265 "polls": 4737560, 00:16:33.265 "idle_polls": 4737289, 00:16:33.265 "completions": 309, 00:16:33.265 "requests": 154, 00:16:33.265 "request_latency": 28217272, 00:16:33.265 "pending_free_request": 0, 00:16:33.265 "pending_rdma_read": 0, 00:16:33.265 "pending_rdma_write": 0, 00:16:33.265 "pending_rdma_send": 0, 00:16:33.265 "total_send_wrs": 268, 00:16:33.265 "send_doorbell_updates": 131, 00:16:33.265 "total_recv_wrs": 4250, 00:16:33.265 "recv_doorbell_updates": 131 00:16:33.265 }, 00:16:33.265 { 00:16:33.265 "name": "mlx5_1", 00:16:33.265 "polls": 4737560, 00:16:33.265 "idle_polls": 4737560, 00:16:33.265 "completions": 0, 00:16:33.265 "requests": 0, 00:16:33.265 "request_latency": 0, 00:16:33.265 "pending_free_request": 0, 00:16:33.265 "pending_rdma_read": 0, 00:16:33.265 "pending_rdma_write": 0, 00:16:33.265 "pending_rdma_send": 0, 00:16:33.265 "total_send_wrs": 0, 00:16:33.265 "send_doorbell_updates": 0, 00:16:33.265 "total_recv_wrs": 4096, 00:16:33.265 "recv_doorbell_updates": 1 00:16:33.265 } 00:16:33.265 ] 00:16:33.265 } 00:16:33.265 ] 00:16:33.266 }, 00:16:33.266 { 00:16:33.266 "name": "nvmf_tgt_poll_group_003", 00:16:33.266 "admin_qpairs": 2, 00:16:33.266 "io_qpairs": 26, 00:16:33.266 "current_admin_qpairs": 0, 00:16:33.266 "current_io_qpairs": 0, 00:16:33.266 "pending_bdev_io": 0, 00:16:33.266 "completed_nvme_io": 125, 00:16:33.266 "transports": [ 00:16:33.266 { 00:16:33.266 "trtype": "RDMA", 00:16:33.266 "pending_data_buffer": 0, 00:16:33.266 "devices": [ 00:16:33.266 { 00:16:33.266 "name": "mlx5_0", 00:16:33.266 "polls": 3322541, 00:16:33.266 "idle_polls": 3322225, 00:16:33.266 "completions": 358, 00:16:33.266 "requests": 179, 00:16:33.266 "request_latency": 39775992, 00:16:33.266 "pending_free_request": 0, 00:16:33.266 "pending_rdma_read": 0, 00:16:33.266 "pending_rdma_write": 0, 00:16:33.266 "pending_rdma_send": 0, 00:16:33.266 "total_send_wrs": 304, 
00:16:33.266 "send_doorbell_updates": 153, 00:16:33.266 "total_recv_wrs": 4275, 00:16:33.266 "recv_doorbell_updates": 154 00:16:33.266 }, 00:16:33.266 { 00:16:33.266 "name": "mlx5_1", 00:16:33.266 "polls": 3322541, 00:16:33.266 "idle_polls": 3322541, 00:16:33.266 "completions": 0, 00:16:33.266 "requests": 0, 00:16:33.266 "request_latency": 0, 00:16:33.266 "pending_free_request": 0, 00:16:33.266 "pending_rdma_read": 0, 00:16:33.266 "pending_rdma_write": 0, 00:16:33.266 "pending_rdma_send": 0, 00:16:33.266 "total_send_wrs": 0, 00:16:33.266 "send_doorbell_updates": 0, 00:16:33.266 "total_recv_wrs": 4096, 00:16:33.266 "recv_doorbell_updates": 1 00:16:33.266 } 00:16:33.266 ] 00:16:33.266 } 00:16:33.266 ] 00:16:33.266 } 00:16:33.266 ] 00:16:33.266 }' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:33.266 13:44:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 116464536 > 0 )) 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.266 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:33.266 rmmod nvme_rdma 00:16:33.266 rmmod nvme_fabrics 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2029370 ']' 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2029370 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2029370 ']' 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2029370 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2029370 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2029370' 00:16:33.526 killing process with pid 2029370 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2029370 00:16:33.526 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2029370 00:16:33.788 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:33.788 00:16:33.788 real 0m42.653s 00:16:33.788 user 2m22.873s 00:16:33.788 sys 0m6.608s 00:16:33.788 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:33.788 13:44:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.788 ************************************ 00:16:33.788 END TEST nvmf_rpc 00:16:33.788 ************************************ 00:16:33.788 13:44:26 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:16:33.788 13:44:26 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:33.788 13:44:26 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:33.788 13:44:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:33.788 ************************************ 00:16:33.788 START TEST nvmf_invalid 00:16:33.788 ************************************ 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:16:33.788 * Looking for test storage... 
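The qpair and latency totals validated just above come from rpc.sh's jsum helper: the nvmf_get_stats JSON is run through a jq filter that emits one number per poll group (or per RDMA device), and awk adds the values up. A minimal stand-alone sketch of that aggregation, in the same shell style; the direct rpc.py invocation is an assumption for self-containment (the test itself sums the $stats string it captured above):

#!/usr/bin/env bash
# Sketch of the jsum pattern from target/rpc.sh: sum one numeric field across
# the poll groups (or devices) reported by nvmf_get_stats.
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

jsum() {
    local filter=$1
    # jq extracts one number per match; awk accumulates and prints the total.
    "$rpc_py" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

# Checks mirroring the ones above: admin/io qpairs were created and the RDMA
# devices reported non-zero completions and request latency.
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))
(( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))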
00:16:33.788 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.788 13:44:26 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.788 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:33.789 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:33.789 13:44:26 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:33.789 13:44:26 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:41.924 
13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:41.924 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:41.924 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:41.924 Found net devices under 0000:98:00.0: mlx_0_0 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:41.924 Found net devices under 0000:98:00.1: mlx_0_1 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:16:41.924 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
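The interface discovery above (get_rdma_if_list / allocate_nic_ips in nvmf/common.sh) walks the net devices found under the Mellanox PCI functions, keeps the mlx_0_* ports, and reads each one's IPv4 address with ip/awk/cut. A minimal sketch of that address-harvesting step, assuming the mlx_0_0/mlx_0_1 names and 192.168.100.x addressing seen in this run:

#!/usr/bin/env bash
# Sketch of get_ip_address from nvmf/common.sh: print the first IPv4 address
# of a network interface with the /prefix stripped.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Interface names as detected in this run; they can differ on other hosts.
for nic in mlx_0_0 mlx_0_1; do
    addr=$(get_ip_address "$nic")
    [[ -z $addr ]] && continue   # interface has no IPv4 address assigned yet
    echo "$nic $addr"            # e.g. mlx_0_0 192.168.100.8, mlx_0_1 192.168.100.9
done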
00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:41.925 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:41.925 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:16:41.925 altname enp152s0f0np0 00:16:41.925 altname ens817f0np0 00:16:41.925 inet 192.168.100.8/24 scope global mlx_0_0 00:16:41.925 valid_lft forever preferred_lft forever 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:41.925 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:41.925 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:16:41.925 altname enp152s0f1np1 00:16:41.925 altname ens817f1np1 00:16:41.925 inet 192.168.100.9/24 scope global mlx_0_1 00:16:41.925 valid_lft forever preferred_lft forever 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:41.925 192.168.100.9' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:41.925 192.168.100.9' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:41.925 192.168.100.9' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2040298 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2040298 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2040298 ']' 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:41.925 13:44:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.925 [2024-06-11 13:44:33.805753] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:16:41.925 [2024-06-11 13:44:33.805801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.925 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.925 [2024-06-11 13:44:33.869459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.925 [2024-06-11 13:44:33.934023] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.925 [2024-06-11 13:44:33.934059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.925 [2024-06-11 13:44:33.934067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.925 [2024-06-11 13:44:33.934073] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.925 [2024-06-11 13:44:33.934078] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
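
At this point invalid.sh has launched the target with build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and waitforlisten blocks until the JSON-RPC socket /var/tmp/spdk.sock answers before any test RPCs are sent. A rough sketch of that start-and-poll pattern, assuming rpc.py's rpc_get_methods call as the liveness probe (retry count and sleep interval are illustrative):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do                                            # waitforlisten-style poll
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break                                                        # socket is up, RPCs can be issued
      fi
      sleep 0.2
  done
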
00:16:41.925 [2024-06-11 13:44:33.934268] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.925 [2024-06-11 13:44:33.934403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.925 [2024-06-11 13:44:33.934528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.925 [2024-06-11 13:44:33.934529] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21995 00:16:41.925 [2024-06-11 13:44:34.756929] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:41.925 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:41.925 { 00:16:41.926 "nqn": "nqn.2016-06.io.spdk:cnode21995", 00:16:41.926 "tgt_name": "foobar", 00:16:41.926 "method": "nvmf_create_subsystem", 00:16:41.926 "req_id": 1 00:16:41.926 } 00:16:41.926 Got JSON-RPC error response 00:16:41.926 response: 00:16:41.926 { 00:16:41.926 "code": -32603, 00:16:41.926 "message": "Unable to find target foobar" 00:16:41.926 }' 00:16:41.926 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:41.926 { 00:16:41.926 "nqn": "nqn.2016-06.io.spdk:cnode21995", 00:16:41.926 "tgt_name": "foobar", 00:16:41.926 "method": "nvmf_create_subsystem", 00:16:41.926 "req_id": 1 00:16:41.926 } 00:16:41.926 Got JSON-RPC error response 00:16:41.926 response: 00:16:41.926 { 00:16:41.926 "code": -32603, 00:16:41.926 "message": "Unable to find target foobar" 00:16:41.926 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:41.926 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:41.926 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9204 00:16:42.186 [2024-06-11 13:44:34.933528] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9204: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:42.186 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:42.186 { 00:16:42.186 "nqn": "nqn.2016-06.io.spdk:cnode9204", 00:16:42.186 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:42.186 "method": "nvmf_create_subsystem", 00:16:42.186 "req_id": 1 00:16:42.186 } 00:16:42.186 Got JSON-RPC error response 00:16:42.186 response: 00:16:42.186 { 00:16:42.186 "code": -32602, 00:16:42.186 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:42.186 }' 00:16:42.186 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:16:42.186 { 00:16:42.186 "nqn": "nqn.2016-06.io.spdk:cnode9204", 00:16:42.186 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:42.186 "method": "nvmf_create_subsystem", 00:16:42.186 "req_id": 1 00:16:42.186 } 00:16:42.186 Got JSON-RPC error response 00:16:42.186 response: 00:16:42.186 { 00:16:42.186 "code": -32602, 00:16:42.186 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:42.186 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:42.186 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:42.186 13:44:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16568 00:16:42.449 [2024-06-11 13:44:35.110121] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16568: invalid model number 'SPDK_Controller' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:42.449 { 00:16:42.449 "nqn": "nqn.2016-06.io.spdk:cnode16568", 00:16:42.449 "model_number": "SPDK_Controller\u001f", 00:16:42.449 "method": "nvmf_create_subsystem", 00:16:42.449 "req_id": 1 00:16:42.449 } 00:16:42.449 Got JSON-RPC error response 00:16:42.449 response: 00:16:42.449 { 00:16:42.449 "code": -32602, 00:16:42.449 "message": "Invalid MN SPDK_Controller\u001f" 00:16:42.449 }' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:42.449 { 00:16:42.449 "nqn": "nqn.2016-06.io.spdk:cnode16568", 00:16:42.449 "model_number": "SPDK_Controller\u001f", 00:16:42.449 "method": "nvmf_create_subsystem", 00:16:42.449 "req_id": 1 00:16:42.449 } 00:16:42.449 Got JSON-RPC error response 00:16:42.449 response: 00:16:42.449 { 00:16:42.449 "code": -32602, 00:16:42.449 "message": "Invalid MN SPDK_Controller\u001f" 00:16:42.449 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
44 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2b' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:16:42.449 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:42.450 
13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'J,%N5P*35+kM~==zSYx\<' 00:16:42.450 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'J,%N5P*35+kM~==zSYx\<' nqn.2016-06.io.spdk:cnode17254 00:16:42.711 [2024-06-11 13:44:35.443194] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17254: invalid serial number 'J,%N5P*35+kM~==zSYx\<' 00:16:42.711 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:42.711 { 00:16:42.711 "nqn": "nqn.2016-06.io.spdk:cnode17254", 00:16:42.711 "serial_number": "J,%N5P*35+kM~==zSYx\\<", 00:16:42.711 "method": "nvmf_create_subsystem", 00:16:42.711 "req_id": 1 00:16:42.711 } 00:16:42.711 Got JSON-RPC error response 00:16:42.711 response: 00:16:42.711 { 00:16:42.711 "code": -32602, 00:16:42.711 "message": "Invalid SN J,%N5P*35+kM~==zSYx\\<" 00:16:42.711 }' 00:16:42.711 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:42.711 { 00:16:42.711 "nqn": "nqn.2016-06.io.spdk:cnode17254", 00:16:42.711 "serial_number": "J,%N5P*35+kM~==zSYx\\<", 00:16:42.711 "method": "nvmf_create_subsystem", 00:16:42.711 "req_id": 1 00:16:42.711 } 00:16:42.711 Got JSON-RPC error response 00:16:42.711 response: 00:16:42.711 { 00:16:42.711 "code": -32602, 00:16:42.711 "message": "Invalid SN J,%N5P*35+kM~==zSYx\\<" 00:16:42.711 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:42.711 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:42.711 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' 
'60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
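
The long run of printf %x / echo -e / string+= lines above and below is gen_random_s from invalid.sh building a random serial or model number one character at a time: it picks an ASCII code from the 32-127 range, converts it to hex, and appends the decoded character. A simplified sketch of that loop (it skips the extra quoting the harness applies to characters such as a leading '-'):

  gen_random_s_sketch() {
      local length=$1 string= ll x
      local chars=($(seq 32 127))                      # same printable code range as the trace
      for ((ll = 0; ll < length; ll++)); do
          x=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")
          string+=$(echo -e "\x$x")                    # hex code -> literal character
      done
      echo "$string"
  }
  gen_random_s_sketch 21                               # e.g. a 21-character serial number
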
00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.712 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 
13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:16:42.972 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:42.973 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:42.973 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:16:42.973 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'x7Q!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM' 00:16:42.973 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'x7Q!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM' nqn.2016-06.io.spdk:cnode5059 00:16:43.232 [2024-06-11 13:44:35.924758] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5059: invalid model number 'x7Q!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM' 00:16:43.232 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:43.232 { 00:16:43.232 "nqn": "nqn.2016-06.io.spdk:cnode5059", 00:16:43.232 "model_number": "x7\u007fQ!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM", 00:16:43.232 "method": "nvmf_create_subsystem", 00:16:43.232 "req_id": 1 00:16:43.232 } 00:16:43.232 Got JSON-RPC error response 00:16:43.232 response: 00:16:43.232 { 00:16:43.232 "code": -32602, 00:16:43.232 "message": "Invalid MN x7\u007fQ!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM" 00:16:43.232 }' 00:16:43.232 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:43.232 { 00:16:43.232 "nqn": "nqn.2016-06.io.spdk:cnode5059", 00:16:43.232 "model_number": "x7\u007fQ!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM", 00:16:43.232 "method": "nvmf_create_subsystem", 00:16:43.232 "req_id": 1 00:16:43.232 } 00:16:43.232 Got JSON-RPC error response 00:16:43.232 response: 00:16:43.232 { 00:16:43.232 "code": -32602, 00:16:43.232 "message": "Invalid MN x7\u007fQ!(K]Xne|*cc0VuJQfv1GDCVsEof[RGZdVxaCM" 00:16:43.232 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:43.232 13:44:35 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:16:43.232 [2024-06-11 13:44:36.125511] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x571790/0x575c80) succeed. 00:16:43.232 [2024-06-11 13:44:36.140085] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x572dd0/0x5b7310) succeed. 00:16:43.492 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:16:43.753 192.168.100.9' 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:16:43.753 [2024-06-11 13:44:36.598851] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:43.753 { 00:16:43.753 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:43.753 "listen_address": { 00:16:43.753 "trtype": "rdma", 00:16:43.753 "traddr": "192.168.100.8", 00:16:43.753 "trsvcid": "4421" 00:16:43.753 }, 00:16:43.753 "method": "nvmf_subsystem_remove_listener", 00:16:43.753 "req_id": 1 00:16:43.753 } 00:16:43.753 Got JSON-RPC error response 00:16:43.753 response: 00:16:43.753 { 00:16:43.753 "code": -32602, 00:16:43.753 "message": "Invalid parameters" 00:16:43.753 }' 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:43.753 { 00:16:43.753 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:43.753 "listen_address": { 00:16:43.753 "trtype": "rdma", 00:16:43.753 "traddr": "192.168.100.8", 00:16:43.753 "trsvcid": "4421" 00:16:43.753 }, 00:16:43.753 "method": "nvmf_subsystem_remove_listener", 00:16:43.753 "req_id": 1 00:16:43.753 } 00:16:43.753 Got JSON-RPC error response 00:16:43.753 response: 00:16:43.753 { 00:16:43.753 "code": -32602, 00:16:43.753 "message": "Invalid parameters" 00:16:43.753 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:43.753 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17883 -i 0 00:16:44.036 [2024-06-11 13:44:36.771417] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17883: invalid cntlid range [0-65519] 00:16:44.036 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:44.036 { 00:16:44.036 "nqn": "nqn.2016-06.io.spdk:cnode17883", 00:16:44.036 "min_cntlid": 0, 00:16:44.036 "method": "nvmf_create_subsystem", 00:16:44.036 "req_id": 1 00:16:44.036 } 00:16:44.036 Got JSON-RPC error response 00:16:44.036 response: 00:16:44.036 { 00:16:44.036 "code": -32602, 00:16:44.036 "message": "Invalid cntlid range [0-65519]" 00:16:44.036 }' 00:16:44.036 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:44.036 { 00:16:44.036 "nqn": "nqn.2016-06.io.spdk:cnode17883", 00:16:44.036 "min_cntlid": 0, 00:16:44.036 "method": "nvmf_create_subsystem", 
00:16:44.036 "req_id": 1 00:16:44.036 } 00:16:44.036 Got JSON-RPC error response 00:16:44.036 response: 00:16:44.036 { 00:16:44.036 "code": -32602, 00:16:44.036 "message": "Invalid cntlid range [0-65519]" 00:16:44.036 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:44.036 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25948 -i 65520 00:16:44.326 [2024-06-11 13:44:36.943992] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25948: invalid cntlid range [65520-65519] 00:16:44.326 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:44.326 { 00:16:44.326 "nqn": "nqn.2016-06.io.spdk:cnode25948", 00:16:44.326 "min_cntlid": 65520, 00:16:44.326 "method": "nvmf_create_subsystem", 00:16:44.326 "req_id": 1 00:16:44.326 } 00:16:44.326 Got JSON-RPC error response 00:16:44.326 response: 00:16:44.326 { 00:16:44.326 "code": -32602, 00:16:44.326 "message": "Invalid cntlid range [65520-65519]" 00:16:44.326 }' 00:16:44.327 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:44.327 { 00:16:44.327 "nqn": "nqn.2016-06.io.spdk:cnode25948", 00:16:44.327 "min_cntlid": 65520, 00:16:44.327 "method": "nvmf_create_subsystem", 00:16:44.327 "req_id": 1 00:16:44.327 } 00:16:44.327 Got JSON-RPC error response 00:16:44.327 response: 00:16:44.327 { 00:16:44.327 "code": -32602, 00:16:44.327 "message": "Invalid cntlid range [65520-65519]" 00:16:44.327 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:44.327 13:44:36 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26653 -I 0 00:16:44.327 [2024-06-11 13:44:37.108574] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26653: invalid cntlid range [1-0] 00:16:44.327 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:44.327 { 00:16:44.327 "nqn": "nqn.2016-06.io.spdk:cnode26653", 00:16:44.327 "max_cntlid": 0, 00:16:44.327 "method": "nvmf_create_subsystem", 00:16:44.327 "req_id": 1 00:16:44.327 } 00:16:44.327 Got JSON-RPC error response 00:16:44.327 response: 00:16:44.327 { 00:16:44.327 "code": -32602, 00:16:44.327 "message": "Invalid cntlid range [1-0]" 00:16:44.327 }' 00:16:44.327 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:44.327 { 00:16:44.327 "nqn": "nqn.2016-06.io.spdk:cnode26653", 00:16:44.327 "max_cntlid": 0, 00:16:44.327 "method": "nvmf_create_subsystem", 00:16:44.327 "req_id": 1 00:16:44.327 } 00:16:44.327 Got JSON-RPC error response 00:16:44.327 response: 00:16:44.327 { 00:16:44.327 "code": -32602, 00:16:44.327 "message": "Invalid cntlid range [1-0]" 00:16:44.327 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:44.327 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1494 -I 65520 00:16:44.586 [2024-06-11 13:44:37.273165] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1494: invalid cntlid range [1-65520] 00:16:44.586 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:44.586 { 00:16:44.586 "nqn": "nqn.2016-06.io.spdk:cnode1494", 00:16:44.586 "max_cntlid": 65520, 00:16:44.586 "method": "nvmf_create_subsystem", 00:16:44.586 "req_id": 1 00:16:44.586 
} 00:16:44.586 Got JSON-RPC error response 00:16:44.586 response: 00:16:44.586 { 00:16:44.586 "code": -32602, 00:16:44.586 "message": "Invalid cntlid range [1-65520]" 00:16:44.586 }' 00:16:44.586 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:44.586 { 00:16:44.586 "nqn": "nqn.2016-06.io.spdk:cnode1494", 00:16:44.586 "max_cntlid": 65520, 00:16:44.586 "method": "nvmf_create_subsystem", 00:16:44.586 "req_id": 1 00:16:44.586 } 00:16:44.586 Got JSON-RPC error response 00:16:44.586 response: 00:16:44.586 { 00:16:44.586 "code": -32602, 00:16:44.586 "message": "Invalid cntlid range [1-65520]" 00:16:44.586 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:44.586 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16425 -i 6 -I 5 00:16:44.586 [2024-06-11 13:44:37.445802] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16425: invalid cntlid range [6-5] 00:16:44.586 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:44.586 { 00:16:44.586 "nqn": "nqn.2016-06.io.spdk:cnode16425", 00:16:44.586 "min_cntlid": 6, 00:16:44.586 "max_cntlid": 5, 00:16:44.586 "method": "nvmf_create_subsystem", 00:16:44.586 "req_id": 1 00:16:44.586 } 00:16:44.586 Got JSON-RPC error response 00:16:44.586 response: 00:16:44.586 { 00:16:44.586 "code": -32602, 00:16:44.586 "message": "Invalid cntlid range [6-5]" 00:16:44.586 }' 00:16:44.586 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:44.586 { 00:16:44.586 "nqn": "nqn.2016-06.io.spdk:cnode16425", 00:16:44.586 "min_cntlid": 6, 00:16:44.586 "max_cntlid": 5, 00:16:44.586 "method": "nvmf_create_subsystem", 00:16:44.586 "req_id": 1 00:16:44.586 } 00:16:44.586 Got JSON-RPC error response 00:16:44.586 response: 00:16:44.586 { 00:16:44.586 "code": -32602, 00:16:44.586 "message": "Invalid cntlid range [6-5]" 00:16:44.586 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:44.586 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:44.845 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:44.845 { 00:16:44.845 "name": "foobar", 00:16:44.845 "method": "nvmf_delete_target", 00:16:44.845 "req_id": 1 00:16:44.845 } 00:16:44.845 Got JSON-RPC error response 00:16:44.845 response: 00:16:44.845 { 00:16:44.846 "code": -32602, 00:16:44.846 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:44.846 }' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:44.846 { 00:16:44.846 "name": "foobar", 00:16:44.846 "method": "nvmf_delete_target", 00:16:44.846 "req_id": 1 00:16:44.846 } 00:16:44.846 Got JSON-RPC error response 00:16:44.846 response: 00:16:44.846 { 00:16:44.846 "code": -32602, 00:16:44.846 "message": "The specified target doesn't exist, cannot delete it." 
00:16:44.846 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:44.846 rmmod nvme_rdma 00:16:44.846 rmmod nvme_fabrics 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2040298 ']' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2040298 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 2040298 ']' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 2040298 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2040298 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2040298' 00:16:44.846 killing process with pid 2040298 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 2040298 00:16:44.846 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 2040298 00:16:45.105 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.105 13:44:37 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:45.105 00:16:45.105 real 0m11.352s 00:16:45.105 user 0m19.861s 00:16:45.105 sys 0m6.220s 00:16:45.105 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:45.105 13:44:37 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:45.105 ************************************ 00:16:45.105 END TEST nvmf_invalid 00:16:45.105 ************************************ 00:16:45.105 13:44:37 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:16:45.105 13:44:37 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:45.105 13:44:37 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:45.105 13:44:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:45.105 
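
Before the abort suite starts below, a short recap of the cntlid cases nvmf_invalid just exercised: per the errors above, a controller ID must fall in 1-65519 and min_cntlid may not exceed max_cntlid, so each of the following calls (rpc.py path shortened here, everything else copied from the trace) is rejected with the noted "Invalid cntlid range" message:

  rpc=./scripts/rpc.py                                                  # full path in the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17883 -i 0        # Invalid cntlid range [0-65519]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25948 -i 65520    # Invalid cntlid range [65520-65519]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26653 -I 0        # Invalid cntlid range [1-0]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1494 -I 65520     # Invalid cntlid range [1-65520]
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16425 -i 6 -I 5   # Invalid cntlid range [6-5]
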
************************************ 00:16:45.105 START TEST nvmf_abort 00:16:45.105 ************************************ 00:16:45.105 13:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:16:45.365 * Looking for test storage... 00:16:45.365 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.365 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.366 13:44:38 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:51.951 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:51.951 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:51.951 Found net devices under 0000:98:00.0: mlx_0_0 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:51.951 Found net devices under 0000:98:00.1: mlx_0_1 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.951 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:51.952 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:52.213 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:52.213 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:16:52.213 altname enp152s0f0np0 00:16:52.213 altname ens817f0np0 00:16:52.213 inet 192.168.100.8/24 scope global mlx_0_0 00:16:52.213 valid_lft forever preferred_lft forever 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:52.213 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:52.213 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:16:52.213 altname enp152s0f1np1 00:16:52.213 altname ens817f1np1 00:16:52.213 inet 192.168.100.9/24 scope global mlx_0_1 00:16:52.213 valid_lft forever preferred_lft forever 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo 
mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:52.213 192.168.100.9' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:52.213 192.168.100.9' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:52.213 192.168.100.9' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:52.213 13:44:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set 
+x 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2045015 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2045015 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2045015 ']' 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.213 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:52.214 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.214 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:52.214 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:52.214 13:44:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:52.214 [2024-06-11 13:44:45.075943] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:16:52.214 [2024-06-11 13:44:45.076016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.214 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.474 [2024-06-11 13:44:45.160980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.474 [2024-06-11 13:44:45.255438] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.474 [2024-06-11 13:44:45.255498] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.474 [2024-06-11 13:44:45.255507] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.474 [2024-06-11 13:44:45.255514] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.474 [2024-06-11 13:44:45.255519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
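The target for this test was launched with -m 0xE, a core mask with bits 1-3 set and bit 0 clear, which is why spdk_app_start reports three available cores and the reactors below come up on cores 1, 2 and 3. A small sketch of how such a mask decodes (not part of the test scripts, just an illustration):

mask=0xE                      # core mask handed to nvmf_tgt via -m
for core in $(seq 0 7); do    # eight bits is plenty for this mask
    if (( (mask >> core) & 1 )); then
        echo "core $core selected"    # prints cores 1, 2 and 3 for 0xE
    fi
done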
00:16:52.474 [2024-06-11 13:44:45.255652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.474 [2024-06-11 13:44:45.255817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.474 [2024-06-11 13:44:45.255819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.046 13:44:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.046 [2024-06-11 13:44:45.936176] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10305d0/0x1034ac0) succeed. 00:16:53.046 [2024-06-11 13:44:45.950147] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1031b70/0x1076150) succeed. 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.306 Malloc0 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.306 Delay0 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.306 [2024-06-11 13:44:46.115092] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:53.306 13:44:46 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:53.306 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.306 [2024-06-11 13:44:46.215012] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:55.851 Initializing NVMe Controllers 00:16:55.851 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:16:55.851 controller IO queue size 128 less than required 00:16:55.851 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:55.851 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:55.851 Initialization complete. Launching workers. 00:16:55.851 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38058 00:16:55.851 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38119, failed to submit 62 00:16:55.851 success 38059, unsuccess 60, failed 0 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:55.851 rmmod nvme_rdma 00:16:55.851 rmmod nvme_fabrics 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2045015 ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2045015 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2045015 ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2045015 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2045015 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2045015' 00:16:55.851 killing process with pid 2045015 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2045015 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2045015 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:55.851 00:16:55.851 real 0m10.697s 00:16:55.851 user 0m14.350s 00:16:55.851 sys 0m5.604s 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:55.851 13:44:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:55.851 ************************************ 00:16:55.851 END TEST nvmf_abort 00:16:55.851 ************************************ 00:16:55.851 13:44:48 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:16:55.851 13:44:48 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:55.851 13:44:48 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:55.851 13:44:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:55.851 ************************************ 00:16:55.851 START TEST nvmf_ns_hotplug_stress 00:16:55.851 ************************************ 00:16:55.851 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:16:56.113 * Looking for test storage... 
00:16:56.113 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.113 13:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:04.252 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:04.252 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:04.252 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:98:00.0: mlx_0_0' 00:17:04.253 Found net devices under 0000:98:00.0: mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:04.253 Found net devices under 0000:98:00.1: mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:04.253 13:44:55 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:04.253 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:04.253 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:17:04.253 altname enp152s0f0np0 00:17:04.253 altname ens817f0np0 00:17:04.253 inet 192.168.100.8/24 scope global mlx_0_0 00:17:04.253 valid_lft forever preferred_lft forever 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:04.253 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:17:04.253 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:17:04.253 altname enp152s0f1np1 00:17:04.253 altname ens817f1np1 00:17:04.253 inet 192.168.100.9/24 scope global mlx_0_1 00:17:04.253 valid_lft forever preferred_lft forever 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:04.253 192.168.100.9' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:04.253 192.168.100.9' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:04.253 192.168.100.9' 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:17:04.253 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2049568 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2049568 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2049568 ']' 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
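The get_rdma_if_list / get_ip_address plumbing traced above boils down to a short pipeline over "ip -o -4 addr show", with the first and second addresses split off via head/tail. A condensed sketch of that logic, with the two interface names hard-coded to the ports discovered in this run (the authoritative helpers live in nvmf/common.sh):

    # Sketch only, condensed from the xtrace above; not the verbatim nvmf/common.sh code.
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show mlx_0_0" prints "... inet 192.168.100.8/24 ..."; keep the bare address
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(
        for nic in mlx_0_0 mlx_0_1; do     # the netdevs found under 0000:98:00.0 / 0000:98:00.1
            get_ip_address "$nic"
        done
    )
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9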
00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:04.254 13:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 [2024-06-11 13:44:55.933604] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:17:04.254 [2024-06-11 13:44:55.933673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.254 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.254 [2024-06-11 13:44:56.017518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:04.254 [2024-06-11 13:44:56.110142] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.254 [2024-06-11 13:44:56.110205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.254 [2024-06-11 13:44:56.110213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.254 [2024-06-11 13:44:56.110220] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.254 [2024-06-11 13:44:56.110226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.254 [2024-06-11 13:44:56.110383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.254 [2024-06-11 13:44:56.110665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.254 [2024-06-11 13:44:56.110667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:17:04.254 13:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:04.254 [2024-06-11 13:44:56.923174] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aab5d0/0x1aafac0) succeed. 00:17:04.254 [2024-06-11 13:44:56.937187] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aacb70/0x1af1150) succeed. 
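The nvmfappstart and nvmf_create_transport steps logged above correspond to the following manual sequence; the RPC-socket polling loop is only a simplified stand-in for the waitforlisten helper, not its real implementation:

    # Sketch of the target bring-up traced above (binary path and options as logged).
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: poll /var/tmp/spdk.sock until the app answers RPCs.
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

    # RDMA transport with the same options as ns_hotplug_stress.sh line 27 in the trace.
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192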
00:17:04.254 13:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:04.514 13:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:04.514 [2024-06-11 13:44:57.370998] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:04.514 13:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:04.774 13:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:17:05.034 Malloc0 00:17:05.034 13:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:05.034 Delay0 00:17:05.034 13:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.295 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:17:05.557 NULL1 00:17:05.557 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:05.557 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:17:05.557 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2049968 00:17:05.557 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:05.557 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.557 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.816 Read completed with error (sct=0, sc=11) 00:17:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.816 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.816 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:17:05.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:05.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:06.077 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:17:06.077 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:17:06.077 [2024-06-11 13:44:58.875526] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:17:06.077 true 00:17:06.077 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:06.077 13:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 13:44:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:07.018 13:44:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:17:07.019 13:44:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:17:07.280 true 00:17:07.280 13:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:07.280 13:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 13:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:08.223 13:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:17:08.223 13:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:17:08.484 true 00:17:08.484 13:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:08.484 13:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 13:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.429 13:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:17:09.429 13:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:17:09.690 true 00:17:09.690 13:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:09.690 13:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 13:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:10.631 13:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:17:10.631 13:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:17:10.891 true 00:17:10.891 13:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 
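The xtrace that repeats from here on is the hot-remove/hot-add loop at lines 44-50 of ns_hotplug_stress.sh: it keeps cycling namespace 1 and growing the NULL1 bdev for as long as the spdk_nvme_perf process (PID 2049968 in this run) stays alive, which is why the perf process keeps logging "Read completed with error (sct=0, sc=11)" while a namespace is being cycled. A condensed sketch reconstructed from the trace, not copied from the script:

    # PERF_PID was set at ns_hotplug_stress.sh line 42 when spdk_nvme_perf was launched.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                       # loop until the 30 s perf run exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # hot-remove NSID 1 (Delay0)
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                                # 1001, 1002, ... as seen above
        $rpc bdev_null_resize NULL1 "$null_size"                    # grow the second namespace's bdev
    done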
00:17:10.891 13:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 13:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:11.831 13:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:17:11.831 13:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:17:12.090 true 00:17:12.090 13:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:12.090 13:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:13.030 13:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.030 13:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:17:13.030 13:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:17:13.290 true 00:17:13.290 13:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:13.290 13:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 13:45:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:14.232 13:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:17:14.232 13:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:17:14.493 true 00:17:14.493 13:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:14.493 13:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 13:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.434 13:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:17:15.434 13:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:17:15.434 true 00:17:15.693 13:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:15.693 13:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.634 13:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:16.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:16.635 13:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:17:16.635 13:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:17:16.635 true 00:17:16.635 13:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:16.635 13:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 13:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:17.838 13:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:17:17.838 13:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:17:17.838 true 00:17:17.838 13:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:17.838 13:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 13:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.781 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:19.043 13:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:17:19.043 13:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:17:19.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:19.043 true 00:17:19.043 13:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:19.043 13:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 13:45:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:20.427 13:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:17:20.428 13:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:17:20.428 true 00:17:20.428 13:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:20.428 13:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 13:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:21.370 13:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:17:21.370 13:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:17:21.665 true 00:17:21.665 13:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:21.665 13:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 13:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.642 13:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:17:22.642 13:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:17:22.902 true 00:17:22.902 13:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:22.902 13:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 13:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:23.845 13:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:17:23.845 13:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:17:23.845 true 00:17:24.105 13:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:24.105 13:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:24.676 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 13:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:24.936 13:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:17:24.936 13:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:17:25.195 true 00:17:25.195 13:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:25.195 13:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:26.136 13:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:26.136 13:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:17:26.136 13:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:17:26.397 true 00:17:26.397 13:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:26.397 13:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 13:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:27.342 13:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:17:27.342 13:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:17:27.603 true 00:17:27.603 13:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:27.603 13:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.543 13:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:28.544 13:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:17:28.544 13:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:17:28.803 true 00:17:28.803 13:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:28.803 13:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 13:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:29.743 13:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:17:29.743 13:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:17:29.743 true 00:17:30.004 13:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:30.004 13:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 13:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:30.946 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:30.946 13:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:17:30.946 13:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:17:30.946 true 00:17:31.206 13:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:31.206 13:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:31.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 13:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:32.035 13:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:17:32.035 13:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:17:32.299 true 00:17:32.299 13:45:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:32.299 13:45:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:33.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.238 13:45:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:33.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.238 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:17:33.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:33.498 13:45:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:17:33.498 13:45:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:17:33.498 true 00:17:33.498 13:45:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:33.498 13:45:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 13:45:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:34.700 13:45:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:17:34.700 13:45:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:17:34.700 true 00:17:34.700 13:45:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:34.700 13:45:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 13:45:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:35.901 13:45:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1026 00:17:35.901 13:45:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:17:35.901 true 00:17:35.901 13:45:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:35.901 13:45:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:36.843 13:45:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:36.843 13:45:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:17:36.843 13:45:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:17:37.103 true 00:17:37.103 13:45:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:37.103 13:45:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.365 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:37.365 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:17:37.365 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:17:37.626 true 00:17:37.626 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:37.626 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.887 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:37.887 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:17:37.887 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:17:38.147 true 00:17:38.147 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:38.147 13:45:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:38.407 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:38.407 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:17:38.407 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1030 00:17:38.669 true 00:17:38.669 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:38.669 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:38.669 Initializing NVMe Controllers
00:17:38.669 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:38.669 Controller IO queue size 128, less than required.
00:17:38.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:38.669 Controller IO queue size 128, less than required.
00:17:38.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:38.669 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:38.669 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:38.669 Initialization complete. Launching workers.
00:17:38.669 ========================================================
00:17:38.669                                                                                        Latency(us)
00:17:38.669 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:17:38.669 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    7401.20       3.61   15630.65    1279.06 1186966.42
00:17:38.669 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   40154.73      19.61    3187.40    1472.62  393429.90
00:17:38.669 ========================================================
00:17:38.669 Total                                                                          :   47555.93      23.22    5123.96    1279.06 1186966.42
00:17:38.669
00:17:38.930 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:38.930 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:17:38.930 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:17:39.190 true 00:17:39.190 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2049968 00:17:39.190 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2049968) - No such process 00:17:39.190 13:45:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2049968 00:17:39.190 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:39.190 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:39.449 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:17:39.449 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:17:39.449 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:17:39.449 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:39.449 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:17:39.708 null0 00:17:39.708 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:39.708 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:39.708 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:17:39.708 null1 00:17:39.708 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:39.708 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:39.708 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:17:39.967 null2 00:17:39.967 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:39.967 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:39.967 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:17:39.967 null3 00:17:40.227 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:40.227 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:40.227 13:45:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:17:40.227 null4 00:17:40.227 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:40.227 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:40.227 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:17:40.487 null5 00:17:40.487 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:40.487 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:40.487 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:17:40.487 null6 00:17:40.488 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:40.488 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:40.488 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:17:40.749 null7 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
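The interleaved xtrace above and below comes from eight add_remove workers hammering subsystem nqn.2016-06.io.spdk:cnode1 in parallel. Pieced together from the trace tags (ns_hotplug_stress.sh@14-@18), each worker is roughly the following; $rpc_py is an illustrative stand-in for /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py, so treat this as a sketch reconstructed from the trace rather than the script verbatim:

    # add_remove <nsid> <bdev> -- reconstructed from the xtrace, not copied from the script
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # expose the null bdev as namespace <nsid>, then hot-remove it again
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }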
00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
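The workers themselves are set up by the parent script (trace tags @58-@66): one null bdev per thread, each repeatedly attached and detached under its own namespace ID, with the PIDs collected so the script can wait for all eight. A minimal reconstruction under the same assumptions (illustrative $rpc_py, structure inferred from the trace):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &              # worker i hotplugs namespace ID i+1
        pids+=($!)
    done
    wait "${pids[@]}"                                 # matches the eight PIDs in the 'wait' call traced below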
00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2057599 2057601 2057604 2057607 2057609 2057612 2057615 2057619 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:40.749 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:40.750 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.011 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
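Each of these hotplug steps is an ordinary JSON-RPC call into the running nvmf target, so the same operations can be issued by hand with scripts/rpc.py. Two invocations lifted straight from the trace above, shown here only as standalone examples:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2    # attach bdev null2 as NSID 3
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3          # hot-remove NSID 3 again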
00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:41.272 13:45:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.272 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:41.533 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:41.534 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:41.796 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:42.057 
13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:42.057 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:42.058 13:45:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.319 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:42.580 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:42.841 13:45:35 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:42.841 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.102 13:45:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.363 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:43.622 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.623 13:45:36 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.623 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:43.883 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.144 13:45:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:44.144 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.144 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:44.404 rmmod nvme_rdma 00:17:44.404 rmmod nvme_fabrics 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2049568 ']' 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2049568 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2049568 ']' 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2049568 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@954 -- # uname 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2049568 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2049568' 00:17:44.404 killing process with pid 2049568 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2049568 00:17:44.404 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2049568 00:17:44.665 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:44.665 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:44.665 00:17:44.665 real 0m48.701s 00:17:44.665 user 3m16.601s 00:17:44.665 sys 0m12.246s 00:17:44.665 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:44.665 13:45:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.665 ************************************ 00:17:44.665 END TEST nvmf_ns_hotplug_stress 00:17:44.665 ************************************ 00:17:44.665 13:45:37 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:44.665 13:45:37 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:44.665 13:45:37 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:44.665 13:45:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:44.665 ************************************ 00:17:44.665 START TEST nvmf_connect_stress 00:17:44.665 ************************************ 00:17:44.665 13:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:17:44.926 * Looking for test storage... 
00:17:44.926 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.926 13:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:51.625 13:45:44 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:17:51.625 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:17:51.625 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:17:51.625 Found net devices under 0000:98:00.0: mlx_0_0 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:17:51.625 Found net devices under 0000:98:00.1: mlx_0_1 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.625 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # 
continue 2 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:51.626 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:51.626 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:17:51.626 altname enp152s0f0np0 00:17:51.626 altname ens817f0np0 00:17:51.626 inet 192.168.100.8/24 scope global mlx_0_0 00:17:51.626 valid_lft forever preferred_lft forever 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:51.626 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:51.626 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:17:51.626 altname enp152s0f1np1 00:17:51.626 altname ens817f1np1 00:17:51.626 inet 192.168.100.9/24 scope global mlx_0_1 00:17:51.626 valid_lft forever preferred_lft forever 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:51.626 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:51.888 192.168.100.9' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:51.888 192.168.100.9' 
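The trace above shows common.sh resolving each RDMA-capable netdev to its IPv4 address: it lists the interface with ip -o -4 addr show, takes the fourth column (ADDR/PREFIX), and strips the prefix length. A minimal standalone sketch of that lookup, assuming a single IPv4 address per interface as on this test bed:

    #!/usr/bin/env bash
    # Sketch of the get_ip_address step traced above.
    # Assumes one IPv4 address per interface, as in this run.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    for nic in mlx_0_0 mlx_0_1; do
        ip_addr=$(get_ip_address "$nic")
        if [[ -n $ip_addr ]]; then
            echo "$nic -> $ip_addr"
        else
            echo "no IPv4 address on $nic" >&2
        fi
    done
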
00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:51.888 192.168.100.9' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2062196 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2062196 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2062196 ']' 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:51.888 13:45:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.888 [2024-06-11 13:45:44.669427] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:17:51.888 [2024-06-11 13:45:44.669479] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.888 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.888 [2024-06-11 13:45:44.749682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.149 [2024-06-11 13:45:44.824707] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
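Both discovered addresses travel as one newline-separated list and are split with head/tail into the first and second target IPs before the transport options are pinned to RDMA, as common.sh does above. A small sketch of that split, reusing the two addresses reported in this run as sample values:

    # Sketch of the RDMA_IP_LIST handling traced above.
    # The two addresses are the ones this run discovered; substitute your own.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    if [[ -z $NVMF_FIRST_TARGET_IP ]]; then
        echo "no RDMA-capable interface with an IPv4 address" >&2
        exit 1
    fi

    # RDMA targets get a bounded shared-buffer pool, matching the trace.
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP opts=$NVMF_TRANSPORT_OPTS"
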
00:17:52.149 [2024-06-11 13:45:44.824756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.149 [2024-06-11 13:45:44.824764] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.149 [2024-06-11 13:45:44.824771] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.149 [2024-06-11 13:45:44.824777] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.149 [2024-06-11 13:45:44.824899] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.149 [2024-06-11 13:45:44.825080] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.149 [2024-06-11 13:45:44.825259] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.723 [2024-06-11 13:45:45.511498] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c585d0/0x1c5cac0) succeed. 00:17:52.723 [2024-06-11 13:45:45.525628] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c59b70/0x1c9e150) succeed. 
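nvmfappstart above boils down to launching nvmf_tgt in the background and blocking until its RPC socket answers, so that the configuration RPCs that follow have something to talk to. A simplified stand-in for that start-and-wait step; the SPDK_DIR default, the socket path, and the use of rpc_get_methods as the readiness probe are assumptions for illustration, not a copy of the framework's waitforlisten helper:

    # Simplified stand-in for the nvmfappstart/waitforlisten step traced above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
    RPC_SOCK=/var/tmp/spdk.sock

    # Same flags as the traced invocation: shm id 0, all trace groups, cores 1-3.
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    echo "started nvmf_tgt as pid $nvmfpid"

    # Poll the RPC socket until the target answers (give up after ~10 s).
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -t 1 -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
            echo "nvmf_tgt is listening on $RPC_SOCK"
            break
        fi
        sleep 0.1
    done
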
00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.723 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.985 [2024-06-11 13:45:45.646230] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.985 NULL1 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2062524 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 EAL: No free 2048 kB hugepages reported on node 1 
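The rpc_cmd calls above configure the target for the connect-stress run: an RDMA transport, a subsystem capped at 10 namespaces with any host allowed, a listener on the first target IP, and a null bdev to attach. Replayed as direct rpc.py invocations, using the script path and values visible in this trace:

    # The configuration RPCs traced above, replayed as plain rpc.py calls.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512                               # 1000 MB null bdev, 512 B blocks

The connect_stress binary is then pointed at that listener with the -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' argument shown above, while rpc.txt is filled with the batch of RPCs the monitor loop will replay.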
00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.985 13:45:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.246 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.246 13:45:46 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:53.246 13:45:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.246 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.246 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.817 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.817 13:45:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:53.817 13:45:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:53.817 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.817 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.079 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.079 13:45:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:54.079 13:45:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.079 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.079 13:45:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.340 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.340 13:45:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:54.340 13:45:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.340 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.340 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.602 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.602 13:45:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:54.602 13:45:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.602 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.602 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:54.863 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.863 13:45:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:54.863 13:45:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:54.863 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.863 13:45:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.433 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.433 13:45:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:55.433 13:45:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.433 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.433 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.694 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.694 13:45:48 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:55.694 13:45:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.694 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.694 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:55.956 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.956 13:45:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:55.956 13:45:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:55.956 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.956 13:45:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.217 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.217 13:45:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:56.217 13:45:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.217 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.217 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:56.478 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.478 13:45:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:56.478 13:45:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:56.478 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.478 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.050 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.050 13:45:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:57.050 13:45:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.050 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.050 13:45:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.311 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.311 13:45:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:57.311 13:45:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.311 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.311 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.572 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.572 13:45:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:57.572 13:45:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.572 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.572 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:57.833 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.833 13:45:50 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:57.833 13:45:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:57.833 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.833 13:45:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.404 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.404 13:45:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:58.404 13:45:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.404 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.404 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.666 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.666 13:45:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:58.666 13:45:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.666 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.666 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:58.927 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.927 13:45:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:58.927 13:45:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:58.927 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.927 13:45:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.188 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.188 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:59.188 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.188 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.188 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:59.448 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.448 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:17:59.448 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:59.448 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.448 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.020 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.020 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:00.020 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.020 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.020 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.280 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.280 13:45:52 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:00.280 13:45:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.280 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.280 13:45:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.541 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.541 13:45:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:00.541 13:45:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.541 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.541 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:00.802 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.802 13:45:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:00.802 13:45:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:00.802 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.802 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.063 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.063 13:45:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:01.063 13:45:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.063 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.063 13:45:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.637 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.637 13:45:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:01.637 13:45:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.637 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.637 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:01.898 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.898 13:45:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:01.898 13:45:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:01.898 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.898 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.159 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.159 13:45:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:02.159 13:45:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.159 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.159 13:45:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.420 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.420 13:45:55 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:02.420 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.420 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.420 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.991 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.991 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:02.991 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:02.991 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.991 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:02.991 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2062524 00:18:03.252 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2062524) - No such process 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2062524 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:03.252 rmmod nvme_rdma 00:18:03.252 rmmod nvme_fabrics 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2062196 ']' 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2062196 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 2062196 ']' 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2062196 00:18:03.252 13:45:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2062196 
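The block above is connect_stress.sh's wait loop: line 34 probes the stress tool with kill -0 while line 35 pushes another RPC at the target, so the loop only falls through once the stress PID is gone; lines 38-39 then reap the process and remove the RPC scratch file. A minimal sketch of that pattern, with an illustrative variable name and rpc_get_methods standing in for the RPC that is not visible in the trace:

    while kill -0 "$STRESS_PID"; do                 # line 34: is the stress tool still running?
        rpc_cmd rpc_get_methods > /dev/null         # line 35: keep the target's RPC path exercised (stand-in call)
    done
    wait "$STRESS_PID" || true                      # line 38: reap it; kill -0 reports "No such process" once it is gone
    rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt   # line 39: drop the RPC scratch file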
00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2062196' 00:18:03.252 killing process with pid 2062196 00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2062196 00:18:03.252 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2062196 00:18:03.515 13:45:56 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.515 13:45:56 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:03.515 00:18:03.515 real 0m18.710s 00:18:03.515 user 0m41.590s 00:18:03.515 sys 0m6.698s 00:18:03.515 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:03.515 13:45:56 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:03.515 ************************************ 00:18:03.515 END TEST nvmf_connect_stress 00:18:03.515 ************************************ 00:18:03.515 13:45:56 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:03.515 13:45:56 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:03.515 13:45:56 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:03.515 13:45:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:03.515 ************************************ 00:18:03.515 START TEST nvmf_fused_ordering 00:18:03.515 ************************************ 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:18:03.515 * Looking for test storage... 
00:18:03.515 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:03.515 13:45:56 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.776 13:45:56 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.776 13:45:56 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.776 13:45:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.776 13:45:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.776 13:45:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.776 13:45:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:18:03.777 13:45:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.366 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:10.367 13:46:03 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:10.367 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:10.367 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:10.367 Found net devices under 0000:98:00.0: mlx_0_0 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:10.367 Found net devices under 0000:98:00.1: mlx_0_1 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:10.367 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:10.628 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.628 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:10.628 altname enp152s0f0np0 00:18:10.628 altname ens817f0np0 00:18:10.628 inet 192.168.100.8/24 scope global mlx_0_0 00:18:10.628 valid_lft forever preferred_lft forever 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:10.628 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:10.629 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.629 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:10.629 altname enp152s0f1np1 00:18:10.629 altname ens817f1np1 00:18:10.629 inet 192.168.100.9/24 scope global mlx_0_1 00:18:10.629 valid_lft forever preferred_lft forever 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:10.629 192.168.100.9' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:10.629 192.168.100.9' 
00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:10.629 192.168.100.9' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2068285 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2068285 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2068285 ']' 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:10.629 13:46:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:10.891 [2024-06-11 13:46:03.546497] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:18:10.891 [2024-06-11 13:46:03.546554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.891 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.891 [2024-06-11 13:46:03.626116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.891 [2024-06-11 13:46:03.712549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
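Just before launching nvmf_tgt (whose start-up notices continue below), the harness derived the target addresses from the two mlx netdevs: each interface's IPv4 address is read off ip -o -4 addr show, the first entry becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP. A compressed sketch of that derivation, where ip_of is an illustrative helper rather than the common.sh function:

    ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST=$(printf '%s\n' "$(ip_of mlx_0_0)" "$(ip_of mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)   # 192.168.100.9 in this run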
00:18:10.891 [2024-06-11 13:46:03.712609] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.891 [2024-06-11 13:46:03.712618] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.891 [2024-06-11 13:46:03.712625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.891 [2024-06-11 13:46:03.712631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.891 [2024-06-11 13:46:03.712665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.463 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.726 [2024-06-11 13:46:04.406055] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a0efb0/0x1a134a0) succeed. 00:18:11.726 [2024-06-11 13:46:04.419836] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a104b0/0x1a54b30) succeed. 
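With waitforlisten satisfied, the harness creates the RDMA transport over RPC; the two create_ib_device notices above are the two mlx5 ports registering with it. The same bring-up, condensed into plain commands run from the SPDK tree (a sketch only; the harness goes through its nvmfappstart, waitforlisten and rpc_cmd helpers):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during start-up
        sleep 0.5                       # poll until the RPC socket answers
    done
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192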
00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.726 [2024-06-11 13:46:04.484666] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.726 NULL1 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.726 13:46:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:11.726 [2024-06-11 13:46:04.543077] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
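The trace above configures the target end to end and then launches the fused_ordering tool, whose EAL start-up banner continues below. Written out as plain rpc.py calls, the same sequence looks roughly like this (a sketch of what the rpc_cmd lines do, run from the SPDK tree):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # null bdev backing the namespace (512-byte blocks; reported as a 1 GB namespace in the tool's output below)
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'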
00:18:11.726 [2024-06-11 13:46:04.543151] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068632 ] 00:18:11.726 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.987 Attached to nqn.2016-06.io.spdk:cnode1 00:18:11.987 Namespace ID: 1 size: 1GB 00:18:11.987 fused_ordering(0) 00:18:11.987 fused_ordering(1) 00:18:11.987 fused_ordering(2) 00:18:11.987 fused_ordering(3) 00:18:11.987 fused_ordering(4) 00:18:11.987 fused_ordering(5) 00:18:11.987 fused_ordering(6) 00:18:11.987 fused_ordering(7) 00:18:11.987 fused_ordering(8) 00:18:11.987 fused_ordering(9) 00:18:11.987 fused_ordering(10) 00:18:11.987 fused_ordering(11) 00:18:11.987 fused_ordering(12) 00:18:11.987 fused_ordering(13) 00:18:11.987 fused_ordering(14) 00:18:11.987 fused_ordering(15) 00:18:11.987 fused_ordering(16) 00:18:11.987 fused_ordering(17) 00:18:11.987 fused_ordering(18) 00:18:11.987 fused_ordering(19) 00:18:11.987 fused_ordering(20) 00:18:11.987 fused_ordering(21) 00:18:11.987 fused_ordering(22) 00:18:11.987 fused_ordering(23) 00:18:11.987 fused_ordering(24) 00:18:11.987 fused_ordering(25) 00:18:11.987 fused_ordering(26) 00:18:11.987 fused_ordering(27) 00:18:11.987 fused_ordering(28) 00:18:11.987 fused_ordering(29) 00:18:11.987 fused_ordering(30) 00:18:11.987 fused_ordering(31) 00:18:11.987 fused_ordering(32) 00:18:11.987 fused_ordering(33) 00:18:11.987 fused_ordering(34) 00:18:11.987 fused_ordering(35) 00:18:11.987 fused_ordering(36) 00:18:11.987 fused_ordering(37) 00:18:11.987 fused_ordering(38) 00:18:11.987 fused_ordering(39) 00:18:11.987 fused_ordering(40) 00:18:11.988 fused_ordering(41) 00:18:11.988 fused_ordering(42) 00:18:11.988 fused_ordering(43) 00:18:11.988 fused_ordering(44) 00:18:11.988 fused_ordering(45) 00:18:11.988 fused_ordering(46) 00:18:11.988 fused_ordering(47) 00:18:11.988 fused_ordering(48) 00:18:11.988 fused_ordering(49) 00:18:11.988 fused_ordering(50) 00:18:11.988 fused_ordering(51) 00:18:11.988 fused_ordering(52) 00:18:11.988 fused_ordering(53) 00:18:11.988 fused_ordering(54) 00:18:11.988 fused_ordering(55) 00:18:11.988 fused_ordering(56) 00:18:11.988 fused_ordering(57) 00:18:11.988 fused_ordering(58) 00:18:11.988 fused_ordering(59) 00:18:11.988 fused_ordering(60) 00:18:11.988 fused_ordering(61) 00:18:11.988 fused_ordering(62) 00:18:11.988 fused_ordering(63) 00:18:11.988 fused_ordering(64) 00:18:11.988 fused_ordering(65) 00:18:11.988 fused_ordering(66) 00:18:11.988 fused_ordering(67) 00:18:11.988 fused_ordering(68) 00:18:11.988 fused_ordering(69) 00:18:11.988 fused_ordering(70) 00:18:11.988 fused_ordering(71) 00:18:11.988 fused_ordering(72) 00:18:11.988 fused_ordering(73) 00:18:11.988 fused_ordering(74) 00:18:11.988 fused_ordering(75) 00:18:11.988 fused_ordering(76) 00:18:11.988 fused_ordering(77) 00:18:11.988 fused_ordering(78) 00:18:11.988 fused_ordering(79) 00:18:11.988 fused_ordering(80) 00:18:11.988 fused_ordering(81) 00:18:11.988 fused_ordering(82) 00:18:11.988 fused_ordering(83) 00:18:11.988 fused_ordering(84) 00:18:11.988 fused_ordering(85) 00:18:11.988 fused_ordering(86) 00:18:11.988 fused_ordering(87) 00:18:11.988 fused_ordering(88) 00:18:11.988 fused_ordering(89) 00:18:11.988 fused_ordering(90) 00:18:11.988 fused_ordering(91) 00:18:11.988 fused_ordering(92) 00:18:11.988 fused_ordering(93) 00:18:11.988 fused_ordering(94) 00:18:11.988 fused_ordering(95) 00:18:11.988 fused_ordering(96) 00:18:11.988 
fused_ordering(97) 00:18:11.988 fused_ordering(98) 00:18:11.988 fused_ordering(99) 00:18:11.988 fused_ordering(100) 00:18:11.988 fused_ordering(101) 00:18:11.988 fused_ordering(102) 00:18:11.988 fused_ordering(103) 00:18:11.988 fused_ordering(104) 00:18:11.988 fused_ordering(105) 00:18:11.988 fused_ordering(106) 00:18:11.988 fused_ordering(107) 00:18:11.988 fused_ordering(108) 00:18:11.988 fused_ordering(109) 00:18:11.988 fused_ordering(110) 00:18:11.988 fused_ordering(111) 00:18:11.988 fused_ordering(112) 00:18:11.988 fused_ordering(113) 00:18:11.988 fused_ordering(114) 00:18:11.988 fused_ordering(115) 00:18:11.988 fused_ordering(116) 00:18:11.988 fused_ordering(117) 00:18:11.988 fused_ordering(118) 00:18:11.988 fused_ordering(119) 00:18:11.988 fused_ordering(120) 00:18:11.988 fused_ordering(121) 00:18:11.988 fused_ordering(122) 00:18:11.988 fused_ordering(123) 00:18:11.988 fused_ordering(124) 00:18:11.988 fused_ordering(125) 00:18:11.988 fused_ordering(126) 00:18:11.988 fused_ordering(127) 00:18:11.988 fused_ordering(128) 00:18:11.988 fused_ordering(129) 00:18:11.988 fused_ordering(130) 00:18:11.988 fused_ordering(131) 00:18:11.988 fused_ordering(132) 00:18:11.988 fused_ordering(133) 00:18:11.988 fused_ordering(134) 00:18:11.988 fused_ordering(135) 00:18:11.988 fused_ordering(136) 00:18:11.988 fused_ordering(137) 00:18:11.988 fused_ordering(138) 00:18:11.988 fused_ordering(139) 00:18:11.988 fused_ordering(140) 00:18:11.988 fused_ordering(141) 00:18:11.988 fused_ordering(142) 00:18:11.988 fused_ordering(143) 00:18:11.988 fused_ordering(144) 00:18:11.988 fused_ordering(145) 00:18:11.988 fused_ordering(146) 00:18:11.988 fused_ordering(147) 00:18:11.988 fused_ordering(148) 00:18:11.988 fused_ordering(149) 00:18:11.988 fused_ordering(150) 00:18:11.988 fused_ordering(151) 00:18:11.988 fused_ordering(152) 00:18:11.988 fused_ordering(153) 00:18:11.988 fused_ordering(154) 00:18:11.988 fused_ordering(155) 00:18:11.988 fused_ordering(156) 00:18:11.988 fused_ordering(157) 00:18:11.988 fused_ordering(158) 00:18:11.988 fused_ordering(159) 00:18:11.988 fused_ordering(160) 00:18:11.988 fused_ordering(161) 00:18:11.988 fused_ordering(162) 00:18:11.988 fused_ordering(163) 00:18:11.988 fused_ordering(164) 00:18:11.988 fused_ordering(165) 00:18:11.988 fused_ordering(166) 00:18:11.988 fused_ordering(167) 00:18:11.988 fused_ordering(168) 00:18:11.988 fused_ordering(169) 00:18:11.988 fused_ordering(170) 00:18:11.988 fused_ordering(171) 00:18:11.988 fused_ordering(172) 00:18:11.988 fused_ordering(173) 00:18:11.988 fused_ordering(174) 00:18:11.988 fused_ordering(175) 00:18:11.988 fused_ordering(176) 00:18:11.988 fused_ordering(177) 00:18:11.988 fused_ordering(178) 00:18:11.988 fused_ordering(179) 00:18:11.988 fused_ordering(180) 00:18:11.988 fused_ordering(181) 00:18:11.988 fused_ordering(182) 00:18:11.988 fused_ordering(183) 00:18:11.988 fused_ordering(184) 00:18:11.988 fused_ordering(185) 00:18:11.988 fused_ordering(186) 00:18:11.988 fused_ordering(187) 00:18:11.988 fused_ordering(188) 00:18:11.988 fused_ordering(189) 00:18:11.988 fused_ordering(190) 00:18:11.988 fused_ordering(191) 00:18:11.988 fused_ordering(192) 00:18:11.988 fused_ordering(193) 00:18:11.988 fused_ordering(194) 00:18:11.988 fused_ordering(195) 00:18:11.988 fused_ordering(196) 00:18:11.988 fused_ordering(197) 00:18:11.988 fused_ordering(198) 00:18:11.988 fused_ordering(199) 00:18:11.988 fused_ordering(200) 00:18:11.988 fused_ordering(201) 00:18:11.988 fused_ordering(202) 00:18:11.988 fused_ordering(203) 00:18:11.988 fused_ordering(204) 
00:18:11.988 fused_ordering(205) 00:18:11.988 fused_ordering(206) 00:18:11.988 fused_ordering(207) 00:18:11.988 fused_ordering(208) 00:18:11.988 fused_ordering(209) 00:18:11.988 fused_ordering(210) 00:18:11.988 fused_ordering(211) 00:18:11.988 fused_ordering(212) 00:18:11.988 fused_ordering(213) 00:18:11.988 fused_ordering(214) 00:18:11.988 fused_ordering(215) 00:18:11.988 fused_ordering(216) 00:18:11.988 fused_ordering(217) 00:18:11.988 fused_ordering(218) 00:18:11.988 fused_ordering(219) 00:18:11.988 fused_ordering(220) 00:18:11.988 fused_ordering(221) 00:18:11.988 fused_ordering(222) 00:18:11.988 fused_ordering(223) 00:18:11.988 fused_ordering(224) 00:18:11.988 fused_ordering(225) 00:18:11.988 fused_ordering(226) 00:18:11.988 fused_ordering(227) 00:18:11.988 fused_ordering(228) 00:18:11.988 fused_ordering(229) 00:18:11.988 fused_ordering(230) 00:18:11.988 fused_ordering(231) 00:18:11.988 fused_ordering(232) 00:18:11.988 fused_ordering(233) 00:18:11.988 fused_ordering(234) 00:18:11.988 fused_ordering(235) 00:18:11.988 fused_ordering(236) 00:18:11.988 fused_ordering(237) 00:18:11.988 fused_ordering(238) 00:18:11.988 fused_ordering(239) 00:18:11.988 fused_ordering(240) 00:18:11.988 fused_ordering(241) 00:18:11.988 fused_ordering(242) 00:18:11.988 fused_ordering(243) 00:18:11.988 fused_ordering(244) 00:18:11.988 fused_ordering(245) 00:18:11.988 fused_ordering(246) 00:18:11.988 fused_ordering(247) 00:18:11.988 fused_ordering(248) 00:18:11.988 fused_ordering(249) 00:18:11.988 fused_ordering(250) 00:18:11.988 fused_ordering(251) 00:18:11.988 fused_ordering(252) 00:18:11.988 fused_ordering(253) 00:18:11.988 fused_ordering(254) 00:18:11.988 fused_ordering(255) 00:18:11.988 fused_ordering(256) 00:18:11.988 fused_ordering(257) 00:18:11.988 fused_ordering(258) 00:18:11.988 fused_ordering(259) 00:18:11.988 fused_ordering(260) 00:18:11.988 fused_ordering(261) 00:18:11.988 fused_ordering(262) 00:18:11.988 fused_ordering(263) 00:18:11.988 fused_ordering(264) 00:18:11.988 fused_ordering(265) 00:18:11.988 fused_ordering(266) 00:18:11.988 fused_ordering(267) 00:18:11.988 fused_ordering(268) 00:18:11.988 fused_ordering(269) 00:18:11.988 fused_ordering(270) 00:18:11.988 fused_ordering(271) 00:18:11.988 fused_ordering(272) 00:18:11.988 fused_ordering(273) 00:18:11.988 fused_ordering(274) 00:18:11.988 fused_ordering(275) 00:18:11.988 fused_ordering(276) 00:18:11.988 fused_ordering(277) 00:18:11.988 fused_ordering(278) 00:18:11.988 fused_ordering(279) 00:18:11.988 fused_ordering(280) 00:18:11.988 fused_ordering(281) 00:18:11.988 fused_ordering(282) 00:18:11.988 fused_ordering(283) 00:18:11.988 fused_ordering(284) 00:18:11.988 fused_ordering(285) 00:18:11.988 fused_ordering(286) 00:18:11.988 fused_ordering(287) 00:18:11.988 fused_ordering(288) 00:18:11.988 fused_ordering(289) 00:18:11.988 fused_ordering(290) 00:18:11.988 fused_ordering(291) 00:18:11.988 fused_ordering(292) 00:18:11.988 fused_ordering(293) 00:18:11.988 fused_ordering(294) 00:18:11.988 fused_ordering(295) 00:18:11.988 fused_ordering(296) 00:18:11.988 fused_ordering(297) 00:18:11.988 fused_ordering(298) 00:18:11.988 fused_ordering(299) 00:18:11.989 fused_ordering(300) 00:18:11.989 fused_ordering(301) 00:18:11.989 fused_ordering(302) 00:18:11.989 fused_ordering(303) 00:18:11.989 fused_ordering(304) 00:18:11.989 fused_ordering(305) 00:18:11.989 fused_ordering(306) 00:18:11.989 fused_ordering(307) 00:18:11.989 fused_ordering(308) 00:18:11.989 fused_ordering(309) 00:18:11.989 fused_ordering(310) 00:18:11.989 fused_ordering(311) 00:18:11.989 
fused_ordering(312) 00:18:11.989 fused_ordering(313) 00:18:11.989 fused_ordering(314) 00:18:11.989 fused_ordering(315) 00:18:11.989 fused_ordering(316) 00:18:11.989 fused_ordering(317) 00:18:11.989 fused_ordering(318) 00:18:11.989 fused_ordering(319) 00:18:11.989 fused_ordering(320) 00:18:11.989 fused_ordering(321) 00:18:11.989 fused_ordering(322) 00:18:11.989 fused_ordering(323) 00:18:11.989 fused_ordering(324) 00:18:11.989 fused_ordering(325) 00:18:11.989 fused_ordering(326) 00:18:11.989 fused_ordering(327) 00:18:11.989 fused_ordering(328) 00:18:11.989 fused_ordering(329) 00:18:11.989 fused_ordering(330) 00:18:11.989 fused_ordering(331) 00:18:11.989 fused_ordering(332) 00:18:11.989 fused_ordering(333) 00:18:11.989 fused_ordering(334) 00:18:11.989 fused_ordering(335) 00:18:11.989 fused_ordering(336) 00:18:11.989 fused_ordering(337) 00:18:11.989 fused_ordering(338) 00:18:11.989 fused_ordering(339) 00:18:11.989 fused_ordering(340) 00:18:11.989 fused_ordering(341) 00:18:11.989 fused_ordering(342) 00:18:11.989 fused_ordering(343) 00:18:11.989 fused_ordering(344) 00:18:11.989 fused_ordering(345) 00:18:11.989 fused_ordering(346) 00:18:11.989 fused_ordering(347) 00:18:11.989 fused_ordering(348) 00:18:11.989 fused_ordering(349) 00:18:11.989 fused_ordering(350) 00:18:11.989 fused_ordering(351) 00:18:11.989 fused_ordering(352) 00:18:11.989 fused_ordering(353) 00:18:11.989 fused_ordering(354) 00:18:11.989 fused_ordering(355) 00:18:11.989 fused_ordering(356) 00:18:11.989 fused_ordering(357) 00:18:11.989 fused_ordering(358) 00:18:11.989 fused_ordering(359) 00:18:11.989 fused_ordering(360) 00:18:11.989 fused_ordering(361) 00:18:11.989 fused_ordering(362) 00:18:11.989 fused_ordering(363) 00:18:11.989 fused_ordering(364) 00:18:11.989 fused_ordering(365) 00:18:11.989 fused_ordering(366) 00:18:11.989 fused_ordering(367) 00:18:11.989 fused_ordering(368) 00:18:11.989 fused_ordering(369) 00:18:11.989 fused_ordering(370) 00:18:11.989 fused_ordering(371) 00:18:11.989 fused_ordering(372) 00:18:11.989 fused_ordering(373) 00:18:11.989 fused_ordering(374) 00:18:11.989 fused_ordering(375) 00:18:11.989 fused_ordering(376) 00:18:11.989 fused_ordering(377) 00:18:11.989 fused_ordering(378) 00:18:11.989 fused_ordering(379) 00:18:11.989 fused_ordering(380) 00:18:11.989 fused_ordering(381) 00:18:11.989 fused_ordering(382) 00:18:11.989 fused_ordering(383) 00:18:11.989 fused_ordering(384) 00:18:11.989 fused_ordering(385) 00:18:11.989 fused_ordering(386) 00:18:11.989 fused_ordering(387) 00:18:11.989 fused_ordering(388) 00:18:11.989 fused_ordering(389) 00:18:11.989 fused_ordering(390) 00:18:11.989 fused_ordering(391) 00:18:11.989 fused_ordering(392) 00:18:11.989 fused_ordering(393) 00:18:11.989 fused_ordering(394) 00:18:11.989 fused_ordering(395) 00:18:11.989 fused_ordering(396) 00:18:11.989 fused_ordering(397) 00:18:11.989 fused_ordering(398) 00:18:11.989 fused_ordering(399) 00:18:11.989 fused_ordering(400) 00:18:11.989 fused_ordering(401) 00:18:11.989 fused_ordering(402) 00:18:11.989 fused_ordering(403) 00:18:11.989 fused_ordering(404) 00:18:11.989 fused_ordering(405) 00:18:11.989 fused_ordering(406) 00:18:11.989 fused_ordering(407) 00:18:11.989 fused_ordering(408) 00:18:11.989 fused_ordering(409) 00:18:11.989 fused_ordering(410) 00:18:12.250 fused_ordering(411) 00:18:12.250 fused_ordering(412) 00:18:12.250 fused_ordering(413) 00:18:12.250 fused_ordering(414) 00:18:12.250 fused_ordering(415) 00:18:12.250 fused_ordering(416) 00:18:12.250 fused_ordering(417) 00:18:12.250 fused_ordering(418) 00:18:12.250 fused_ordering(419) 
00:18:12.250 - 00:18:12.514 fused_ordering(420) through fused_ordering(1023) logged in sequence
00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:12.775 rmmod nvme_rdma 00:18:12.775 rmmod nvme_fabrics 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2068285 ']'
13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2068285 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2068285 ']' 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2068285 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2068285 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2068285' 00:18:12.775 killing process with pid 2068285 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2068285 00:18:12.775 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2068285 00:18:13.037 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.037 13:46:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:13.037 00:18:13.037 real 0m9.420s 00:18:13.037 user 0m5.262s 00:18:13.037 sys 0m5.544s 00:18:13.037 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:13.037 13:46:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:13.037 ************************************ 00:18:13.037 END TEST nvmf_fused_ordering 00:18:13.037 ************************************ 00:18:13.037 13:46:05 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:18:13.037 13:46:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:13.037 13:46:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:13.037 13:46:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:13.037 ************************************ 00:18:13.037 START TEST nvmf_delete_subsystem 00:18:13.037 ************************************ 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:18:13.037 * Looking for test storage... 
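Before the delete_subsystem test assembles its own target below, the trace above shows the standard teardown of the previous test: nvmftestfini unloads nvme-rdma and nvme-fabrics inside a bounded retry loop, then killprocess confirms that the recorded nvmfpid still exists before killing it and waiting for it to exit. The following is a minimal stand-alone sketch of that pattern; the function names and the pid variable are illustrative, not the actual common.sh/autotest_common.sh implementations.

#!/usr/bin/env bash
# Teardown sketch modelled on the nvmftestfini/killprocess trace above.
# Not the real SPDK test-harness functions; names here are illustrative.
set -u

unload_fabrics_modules() {
    # Module removal can fail while connections drain, so retry a bounded
    # number of times (the trace shows 'for i in {1..20}' around modprobe -r).
    set +e
    for _ in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
}

kill_target() {
    local pid=$1
    [ -n "$pid" ] || return 0               # no target was started
    kill -0 "$pid" 2>/dev/null || return 0  # target already exited
    echo "killing process with pid $pid"
    kill "$pid"
    # 'wait' only applies to children of this shell, which is how the
    # harness launches nvmf_tgt.
    wait "$pid" 2>/dev/null || true
}

# Example usage with a hypothetical recorded pid:
# unload_fabrics_modules
# kill_target "$nvmfpid"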
00:18:13.037 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.037 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.038 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.299 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:13.299 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:13.299 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:18:13.299 13:46:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:19.890 13:46:12 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:19.890 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:19.891 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:19.891 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:19.891 Found net devices under 0000:98:00.0: mlx_0_0 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.891 13:46:12 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:19.891 Found net devices under 0000:98:00.1: mlx_0_1 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:19.891 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.152 13:46:12 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:20.152 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.152 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:20.152 altname enp152s0f0np0 00:18:20.152 altname ens817f0np0 00:18:20.152 inet 192.168.100.8/24 scope global mlx_0_0 00:18:20.152 valid_lft forever preferred_lft forever 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:20.152 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.152 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:20.152 altname enp152s0f1np1 00:18:20.152 altname ens817f1np1 00:18:20.152 inet 192.168.100.9/24 scope global mlx_0_1 00:18:20.152 valid_lft forever preferred_lft forever 00:18:20.152 13:46:12 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:20.152 192.168.100.9' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:20.152 192.168.100.9' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:20.152 192.168.100.9' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2072507 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2072507 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2072507 ']' 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:20.152 13:46:12 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.152 [2024-06-11 13:46:13.000955] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:18:20.153 [2024-06-11 13:46:13.001006] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.153 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.153 [2024-06-11 13:46:13.061421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:20.413 [2024-06-11 13:46:13.126118] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.413 [2024-06-11 13:46:13.126156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.413 [2024-06-11 13:46:13.126163] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.413 [2024-06-11 13:46:13.126170] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.413 [2024-06-11 13:46:13.126176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:20.413 [2024-06-11 13:46:13.126324] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.413 [2024-06-11 13:46:13.126325] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:20.988 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.989 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:20.989 [2024-06-11 13:46:13.831491] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ede7b0/0x1ee2ca0) succeed. 00:18:20.989 [2024-06-11 13:46:13.844801] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1edfcb0/0x1f24330) succeed. 
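At this point the harness has classified the two Mellanox ports by PCI ID (0x15b3:0x1015), read their IPv4 addresses (192.168.100.8 and 192.168.100.9) with ip -o -4 addr show, started nvmf_tgt on two cores, and created the RDMA transport over the RPC socket. The sketch below condenses those steps into one stand-alone script; the socket-polling loop is a simplified stand-in for the harness's waitforlisten, SPDK_DIR is an assumed variable, and only the nvmf_tgt and rpc.py arguments are taken directly from the trace.

#!/usr/bin/env bash
# Condensed bring-up sketch based on the trace above (illustrative, not the
# real nvmftestinit/nvmfappstart from SPDK's nvmf/common.sh).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# First IPv4 address of an interface, exactly as the traced helper does it.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run

# Start the target with the same core mask and log flags as the trace.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll for the RPC UNIX socket.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

# Create the RDMA transport with the options used by delete_subsystem.sh.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma \
    --num-shared-buffers 1024 -u 8192

echo "nvmf_tgt running as pid $nvmfpid; RDMA target IP $NVMF_FIRST_TARGET_IP"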
00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 [2024-06-11 13:46:13.929981] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 NULL1 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 Delay0 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2072702 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:18:21.278 13:46:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:21.278 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.278 [2024-06-11 13:46:14.038512] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
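The RPC calls traced above assemble the device under test: subsystem nqn.2016-06.io.spdk:cnode1 is created with allow-any-host, a fixed serial number and a 10-namespace cap, an RDMA listener is added on 192.168.100.8:4420, a 1000 MB null bdev is wrapped in a delay bdev with large artificial latencies and attached as the namespace, and spdk_nvme_perf is started against the listener so that I/O is still outstanding when nvmf_delete_subsystem is issued in the trace that follows. The sketch below replays the same calls through scripts/rpc.py; the flag values are copied from the trace, while the wrapper function and variable names are illustrative.

#!/usr/bin/env bash
# Illustrative replay of the subsystem setup traced above (flag values are
# taken from the trace; the script structure itself is an assumption).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

NQN=nqn.2016-06.io.spdk:cnode1
TARGET_IP=192.168.100.8

# Subsystem: allow any host (-a), fixed serial, at most 10 namespaces.
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a "$TARGET_IP" -s 4420

# Null bdev wrapped in a delay bdev (latency values taken from the trace)
# so that plenty of I/O is still queued when the subsystem gets deleted.
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0

# Queue up I/O against the listener with the same perf options as the test.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r "trtype:rdma adrfam:IPv4 traddr:$TARGET_IP trsvcid:4420" \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2                            # the test sleeps before deleting
rpc nvmf_delete_subsystem "$NQN"   # races against the running perf job
wait "$perf_pid" || true           # perf is expected to report I/O errors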
00:18:23.216 13:46:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.216 13:46:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.216 13:46:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:24.598 NVMe io qpair process completion error 00:18:24.598 NVMe io qpair process completion error 00:18:24.598 NVMe io qpair process completion error 00:18:24.598 NVMe io qpair process completion error 00:18:24.598 NVMe io qpair process completion error 00:18:24.598 NVMe io qpair process completion error 00:18:24.598 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.598 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:18:24.598 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2072702 00:18:24.598 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:24.858 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:24.858 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2072702 00:18:24.858 13:46:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Write completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting I/O failed: -6 00:18:25.430 Read completed with error (sct=0, sc=8) 00:18:25.430 starting 
I/O failed: -6
[repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions from 00:18:25.430 to 00:18:25.431, many followed by 'starting I/O failed: -6']
00:18:25.431 Read
completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Write completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Write completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Read completed with error (sct=0, sc=8) 00:18:25.431 Write completed with error (sct=0, sc=8) 00:18:25.431 Initializing NVMe Controllers 00:18:25.431 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:25.431 Controller IO queue size 128, less than required. 00:18:25.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:25.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:25.431 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:25.431 Initialization complete. Launching workers. 00:18:25.431 ======================================================== 00:18:25.431 Latency(us) 00:18:25.431 Device Information : IOPS MiB/s Average min max 00:18:25.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.69 0.04 1590805.39 1000132.17 2966961.67 00:18:25.431 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.69 0.04 1592285.41 1001691.11 2967823.68 00:18:25.431 ======================================================== 00:18:25.431 Total : 161.38 0.08 1591545.40 1000132.17 2967823.68 00:18:25.431 00:18:25.431 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:25.431 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2072702 00:18:25.431 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:18:25.431 [2024-06-11 13:46:18.147682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:25.431 [2024-06-11 13:46:18.147712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
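The shutdown check traced above hinges on a bounded polling loop in delete_subsystem.sh (lines 35-38, and again at 57-60): after the subsystem is deleted while spdk_nvme_perf still has I/O in flight, the script probes the perf PID with kill -0 every 0.5 s until the process exits. A minimal sketch of that pattern, with $perf_pid standing in as a placeholder for the concrete PIDs in the trace (2072702, 2073716); the exact failure handling in the real script may differ:

    # Poll a backgrounded spdk_nvme_perf process until it exits, giving up
    # after roughly 15 s (30 iterations x 0.5 s), mirroring the loop traced
    # at delete_subsystem.sh lines 35-38 above. $perf_pid is a placeholder.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        if (( delay++ > 30 )); then
            echo "spdk_nvme_perf ($perf_pid) still running after delete" >&2
            break
        fi
    done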
00:18:25.431 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2072702 00:18:26.003 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2072702) - No such process 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2072702 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2072702 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2072702 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:26.003 [2024-06-11 13:46:18.676436] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2073716 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:26.003 13:46:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:26.003 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.003 [2024-06-11 13:46:18.771479] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:26.576 13:46:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:26.576 13:46:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:26.576 13:46:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:26.836 13:46:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:26.836 13:46:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:26.836 13:46:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:27.407 13:46:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:27.407 13:46:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:27.407 13:46:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:27.977 13:46:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:27.977 13:46:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:27.977 13:46:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:28.548 13:46:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:28.548 13:46:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:28.548 13:46:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:28.809 13:46:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:28.809 13:46:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:28.809 13:46:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:29.378 13:46:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:29.378 13:46:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:29.378 13:46:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:29.949 13:46:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:29.949 13:46:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:29.949 13:46:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:30.520 13:46:23 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:30.520 13:46:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:30.520 13:46:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:31.092 13:46:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:31.092 13:46:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:31.092 13:46:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:31.353 13:46:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:31.353 13:46:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:31.353 13:46:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:31.923 13:46:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:31.923 13:46:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:31.923 13:46:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:32.493 13:46:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:32.493 13:46:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:32.493 13:46:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:33.065 13:46:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:33.065 13:46:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:33.065 13:46:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:18:33.065 Initializing NVMe Controllers 00:18:33.065 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:18:33.065 Controller IO queue size 128, less than required. 00:18:33.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:33.065 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:18:33.065 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:18:33.065 Initialization complete. Launching workers. 
00:18:33.065 ======================================================== 00:18:33.065 Latency(us) 00:18:33.065 Device Information : IOPS MiB/s Average min max 00:18:33.065 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001195.28 1000040.74 1003587.15 00:18:33.065 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001760.55 1000044.09 1004978.67 00:18:33.065 ======================================================== 00:18:33.065 Total : 256.00 0.12 1001477.92 1000040.74 1004978.67 00:18:33.065 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2073716 00:18:33.637 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2073716) - No such process 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2073716 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:33.637 rmmod nvme_rdma 00:18:33.637 rmmod nvme_fabrics 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2072507 ']' 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2072507 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2072507 ']' 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2072507 00:18:33.637 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2072507 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2072507' 00:18:33.638 killing process with pid 2072507 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 
2072507 00:18:33.638 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 2072507 00:18:33.899 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.899 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:33.899 00:18:33.899 real 0m20.747s 00:18:33.899 user 0m50.091s 00:18:33.899 sys 0m6.248s 00:18:33.899 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:33.899 13:46:26 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:18:33.899 ************************************ 00:18:33.899 END TEST nvmf_delete_subsystem 00:18:33.899 ************************************ 00:18:33.899 13:46:26 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:18:33.899 13:46:26 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:33.899 13:46:26 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:33.899 13:46:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:33.899 ************************************ 00:18:33.899 START TEST nvmf_ns_masking 00:18:33.899 ************************************ 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:18:33.899 * Looking for test storage... 00:18:33.899 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:18:33.899 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=2eb94775-79c2-4f93-b05b-9b5cb23e4421 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.900 13:46:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local 
-ga mlx 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:18:42.048 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:18:42.048 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
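The device scan above comes from gather_supported_nvmf_pci_devs: common.sh collects known Intel (e810/x722) and Mellanox device IDs, and because SPDK_TEST_NVMF_NICS=mlx5 only the mlx list is kept, which is how both ports of the ConnectX adapter at 0000:98:00.0/0000:98:00.1 (15b3:1015) are found. A rough, hypothetical stand-alone equivalent using lspci rather than the script's pci_bus_cache helper, restricted to the Mellanox IDs listed above:

    # Hypothetical stand-alone rework of the scan: ask lspci for each
    # Mellanox vendor:device pair that common.sh puts into its "mlx" array
    # and print any matching PCI functions (0x1015 = the ConnectX parts
    # found in this run). The real script resolves these via pci_bus_cache.
    mlx_ids="15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d 15b3:1017 15b3:1019 15b3:1015 15b3:1013"
    for id in $mlx_ids; do
        lspci -Dn -d "$id" | while read -r addr _; do
            echo "Found $addr ($id)"
        done
    done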
00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.048 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:18:42.049 Found net devices under 0000:98:00.0: mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:18:42.049 Found net devices under 0000:98:00.1: mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:42.049 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:42.049 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:18:42.049 altname enp152s0f0np0 00:18:42.049 altname ens817f0np0 00:18:42.049 inet 192.168.100.8/24 scope global mlx_0_0 00:18:42.049 valid_lft forever preferred_lft forever 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:42.049 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:42.049 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:18:42.049 altname enp152s0f1np1 00:18:42.049 altname ens817f1np1 00:18:42.049 inet 192.168.100.9/24 scope global mlx_0_1 00:18:42.049 valid_lft forever preferred_lft forever 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:42.049 
13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:42.049 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:42.050 192.168.100.9' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:42.050 192.168.100.9' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:42.050 192.168.100.9' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2078934 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2078934 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2078934 ']' 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
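Address discovery above is plain shell plumbing over ip: for every RDMA-capable netdev returned by get_rdma_if_list, the first IPv4 address is extracted, and the first two results become the target IPs used for the rest of the run (192.168.100.8 and 192.168.100.9 here). A condensed sketch of that extraction, with the interface names hard-coded to the two found in this run:

    # Condensed sketch of allocate_nic_ips / get_ip_address as traced above:
    # read the IPv4 address off each mlx netdev, then split the list into
    # first and second target IPs the same way common.sh does (head/tail).
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    rdma_ip_list=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)  # 192.168.100.9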
00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:42.050 13:46:33 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.050 [2024-06-11 13:46:33.870246] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:18:42.050 [2024-06-11 13:46:33.870314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.050 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.050 [2024-06-11 13:46:33.938480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.050 [2024-06-11 13:46:34.015903] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.050 [2024-06-11 13:46:34.015946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.050 [2024-06-11 13:46:34.015954] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.050 [2024-06-11 13:46:34.015961] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.050 [2024-06-11 13:46:34.015966] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.050 [2024-06-11 13:46:34.016052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.050 [2024-06-11 13:46:34.016150] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.050 [2024-06-11 13:46:34.016339] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.050 [2024-06-11 13:46:34.016340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.050 13:46:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:42.050 [2024-06-11 13:46:34.861774] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1658e90/0x165d380) succeed. 00:18:42.050 [2024-06-11 13:46:34.875892] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x165a4d0/0x169ea10) succeed. 
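With networking in place, the target itself is brought up before the masking test proper: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0xF, the test waits for the RPC socket, and an RDMA transport is created with the --num-shared-buffers 1024 -u 8192 options shown, at which point both mlx5 IB devices are registered (the NOTICE lines above). A compressed sketch of that bring-up, with a simple socket-wait loop standing in for the waitforlisten helper:

    # Compressed sketch of the nvmf target bring-up traced above. The
    # while-loop is a simplified stand-in for common.sh's waitforlisten.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the app exposes its RPC socket.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192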
00:18:42.312 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:18:42.312 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:18:42.312 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:42.312 Malloc1 00:18:42.312 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:42.574 Malloc2 00:18:42.574 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.835 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:42.835 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:43.096 [2024-06-11 13:46:35.853949] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:43.096 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:18:43.096 13:46:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2eb94775-79c2-4f93-b05b-9b5cb23e4421 -a 192.168.100.8 -s 4420 -i 4 00:18:43.669 13:46:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:18:43.669 13:46:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:43.669 13:46:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.669 13:46:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:18:43.669 13:46:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme 
list-ns /dev/nvme0 00:18:45.583 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:45.583 [ 0]:0x1 00:18:45.584 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:45.584 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:45.584 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8fed016844c246cc845cf2e7ec28ac27 00:18:45.584 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8fed016844c246cc845cf2e7ec28ac27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:45.584 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:45.844 [ 0]:0x1 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8fed016844c246cc845cf2e7ec28ac27 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8fed016844c246cc845cf2e7ec28ac27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:45.844 [ 1]:0x2 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:18:45.844 13:46:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.415 13:46:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:46.415 13:46:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:46.675 13:46:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:18:46.675 13:46:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2eb94775-79c2-4f93-b05b-9b5cb23e4421 -a 192.168.100.8 -s 4420 -i 4 00:18:47.245 13:46:39 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:47.245 13:46:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:47.245 13:46:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.245 13:46:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:18:47.245 13:46:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:18:47.245 13:46:39 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.154 13:46:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:49.154 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:49.155 13:46:42 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:49.155 [ 0]:0x2 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:49.155 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:49.414 [ 0]:0x1 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8fed016844c246cc845cf2e7ec28ac27 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8fed016844c246cc845cf2e7ec28ac27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:49.414 [ 1]:0x2 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:49.414 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:49.674 13:46:42 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:49.674 13:46:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:49.934 [ 0]:0x2 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:18:49.934 13:46:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.195 13:46:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:50.456 13:46:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:18:50.456 13:46:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2eb94775-79c2-4f93-b05b-9b5cb23e4421 -a 192.168.100.8 -s 4420 -i 4 00:18:51.029 13:46:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:51.029 13:46:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:18:51.029 13:46:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.029 13:46:43 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:18:51.029 13:46:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:18:51.029 13:46:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:52.944 [ 0]:0x1 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8fed016844c246cc845cf2e7ec28ac27 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8fed016844c246cc845cf2e7ec28ac27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:52.944 [ 1]:0x2 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:52.944 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:53.204 13:46:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:18:53.204 13:46:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:53.204 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 
-- # valid_exec_arg ns_is_visible 0x1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:53.205 [ 0]:0x2 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.205 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.502 [2024-06-11 13:46:46.259430] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:53.502 request: 00:18:53.502 { 00:18:53.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.502 "nsid": 2, 00:18:53.502 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.502 "method": "nvmf_ns_remove_host", 00:18:53.502 "req_id": 1 00:18:53.502 } 00:18:53.502 Got JSON-RPC error response 00:18:53.502 response: 00:18:53.502 { 00:18:53.502 "code": -32602, 00:18:53.502 "message": "Invalid parameters" 00:18:53.502 } 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:18:53.502 [ 0]:0x2 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.502 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:53.794 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1264824efa724ffe8b4160dbee0364b1 00:18:53.794 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1264824efa724ffe8b4160dbee0364b1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.795 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:18:53.795 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.055 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.315 13:46:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:54.315 rmmod nvme_rdma 00:18:54.315 rmmod nvme_fabrics 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2078934 ']' 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2078934 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2078934 ']' 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2078934 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2078934 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 
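The whole masking test hinges on two small pieces: the visibility probe and the per-host masking RPCs. A condensed sketch of both, using the same device node and NQNs as the trace (the in-tree ns_is_visible in target/ns_masking.sh differs in minor detail):

  # Visibility probe: a namespace counts as visible when it appears in list-ns
  # and Identify Namespace reports a non-zero NGUID.
  ns_is_visible() {
      local nsid=$1
      nvme list-ns /dev/nvme0 | grep "$nsid"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  # Host side: connect with an explicit host NQN so the target can key visibility
  # off it (the test also pins a host UUID with -I).
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -a 192.168.100.8 -s 4420 -i 4

  # Masking flow: attach the namespace hidden, then expose/hide it per host NQN.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  ./scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 visible to host1
  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again

The NOT-wrapped nvmf_ns_remove_host against nsid 2 is meant to fail: Malloc2 was attached without --no-auto-visible, so its per-host visibility cannot be changed and the target answers with the Invalid parameters error captured above.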
00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2078934' 00:18:54.315 killing process with pid 2078934 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2078934 00:18:54.315 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 2078934 00:18:54.576 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.576 13:46:47 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:54.576 00:18:54.576 real 0m20.689s 00:18:54.576 user 0m57.923s 00:18:54.576 sys 0m6.444s 00:18:54.576 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:54.576 13:46:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.576 ************************************ 00:18:54.576 END TEST nvmf_ns_masking 00:18:54.576 ************************************ 00:18:54.576 13:46:47 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:18:54.576 13:46:47 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:54.576 13:46:47 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:54.576 13:46:47 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:54.576 13:46:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:54.576 ************************************ 00:18:54.576 START TEST nvmf_nvme_cli 00:18:54.576 ************************************ 00:18:54.576 13:46:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:18:54.576 * Looking for test storage... 
00:18:54.576 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:54.576 13:46:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.837 13:46:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:18:54.838 13:46:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.428 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:01.429 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:01.429 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:01.429 Found net devices under 0000:98:00.0: mlx_0_0 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.429 13:46:53 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:01.429 Found net devices under 0000:98:00.1: mlx_0_1 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:01.429 13:46:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
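The module-loading step above is plain modprobe work. A minimal equivalent of what nvmf/common.sh does here, assuming the kernel RDMA stack is installed and the commands run as root:

  # Kernel-side prerequisites for an RDMA-capable target.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  # The initiator additionally needs the NVMe/RDMA host driver before "nvme connect -t rdma":
  modprobe nvme-rdma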
00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:01.429 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:01.429 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:01.429 altname enp152s0f0np0 00:19:01.429 altname ens817f0np0 00:19:01.429 inet 192.168.100.8/24 scope global mlx_0_0 00:19:01.429 valid_lft forever preferred_lft forever 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:01.429 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:01.429 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:01.429 altname enp152s0f1np1 00:19:01.429 altname ens817f1np1 00:19:01.429 inet 192.168.100.9/24 scope global mlx_0_1 00:19:01.429 valid_lft forever preferred_lft forever 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:01.429 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:01.430 192.168.100.9' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:01.430 192.168.100.9' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:01.430 192.168.100.9' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2085347 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2085347 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 2085347 ']' 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:01.430 13:46:54 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:01.430 [2024-06-11 13:46:54.266814] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:01.430 [2024-06-11 13:46:54.266880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.430 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.430 [2024-06-11 13:46:54.334565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.691 [2024-06-11 13:46:54.408547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.691 [2024-06-11 13:46:54.408588] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.691 [2024-06-11 13:46:54.408596] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.691 [2024-06-11 13:46:54.408602] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.691 [2024-06-11 13:46:54.408608] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
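The get_ip_address calls traced above recover the RDMA interface addresses that become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, and the transport options they feed. A minimal standalone sketch of that extraction, assuming the mlx_0_0/mlx_0_1 interface names seen in this run:

# Sketch only: reproduce the IP discovery performed by nvmf/common.sh in the trace above.
# Interface names are taken from this log; adjust them for other hosts.
for ifc in mlx_0_0 mlx_0_1; do
    # `ip -o -4 addr show` prints one line per address; field 4 is ADDR/PREFIX.
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# The first address becomes the target address; the RDMA transport gets extra shared buffers.
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'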
00:19:01.691 [2024-06-11 13:46:54.408752] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.691 [2024-06-11 13:46:54.408880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.691 [2024-06-11 13:46:54.409051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.691 [2024-06-11 13:46:54.409051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.264 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.264 [2024-06-11 13:46:55.122210] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17cde90/0x17d2380) succeed. 00:19:02.264 [2024-06-11 13:46:55.136768] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17cf4d0/0x1813a10) succeed. 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 Malloc0 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 Malloc1 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 [2024-06-11 13:46:55.339291] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.525 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:19:02.786 00:19:02.786 Discovery Log Number of Records 2, Generation counter 2 00:19:02.786 =====Discovery Log Entry 0====== 00:19:02.786 trtype: rdma 00:19:02.786 adrfam: ipv4 00:19:02.786 subtype: current discovery subsystem 00:19:02.786 treq: not required 00:19:02.786 portid: 0 00:19:02.786 trsvcid: 4420 00:19:02.786 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:02.786 traddr: 192.168.100.8 00:19:02.786 eflags: explicit discovery connections, duplicate discovery information 00:19:02.786 rdma_prtype: not specified 00:19:02.786 rdma_qptype: connected 00:19:02.786 rdma_cms: rdma-cm 00:19:02.787 rdma_pkey: 0x0000 00:19:02.787 =====Discovery Log Entry 1====== 00:19:02.787 trtype: rdma 00:19:02.787 adrfam: ipv4 00:19:02.787 subtype: nvme subsystem 00:19:02.787 treq: not required 00:19:02.787 portid: 0 00:19:02.787 trsvcid: 4420 00:19:02.787 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:02.787 traddr: 192.168.100.8 00:19:02.787 eflags: none 00:19:02.787 rdma_prtype: not specified 00:19:02.787 rdma_qptype: connected 00:19:02.787 rdma_cms: rdma-cm 00:19:02.787 rdma_pkey: 0x0000 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:19:02.787 13:46:55 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:02.787 13:46:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:04.171 13:46:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:04.171 13:46:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:19:04.171 13:46:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:19:04.171 13:46:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:19:04.171 13:46:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:19:04.171 13:46:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:19:06.087 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:19:06.088 /dev/nvme0n1 ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:06.088 13:46:58 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:07.474 rmmod nvme_rdma 00:19:07.474 rmmod nvme_fabrics 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2085347 ']' 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2085347 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 2085347 ']' 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 2085347 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2085347 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2085347' 00:19:07.474 killing process with pid 2085347 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 2085347 00:19:07.474 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 2085347 00:19:07.735 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:07.735 13:47:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:07.735 00:19:07.735 real 0m13.184s 00:19:07.735 user 0m26.357s 00:19:07.735 sys 0m5.571s 00:19:07.735 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:07.735 13:47:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:07.735 ************************************ 00:19:07.735 END TEST nvmf_nvme_cli 00:19:07.735 ************************************ 00:19:07.735 13:47:00 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:19:07.735 13:47:00 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:19:07.735 13:47:00 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:07.735 13:47:00 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:07.735 13:47:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:07.735 ************************************ 00:19:07.735 START TEST nvmf_host_management 00:19:07.735 ************************************ 00:19:07.735 13:47:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:19:07.997 * Looking for test storage... 
00:19:07.997 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.997 13:47:00 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.143 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:16.144 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:16.144 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:16.144 Found net devices under 0000:98:00.0: mlx_0_0 00:19:16.144 
13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:16.144 Found net devices under 0000:98:00.1: mlx_0_1 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.144 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:16.145 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.145 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:16.145 altname enp152s0f0np0 00:19:16.145 altname ens817f0np0 00:19:16.145 inet 192.168.100.8/24 scope global mlx_0_0 00:19:16.145 valid_lft forever preferred_lft forever 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:16.145 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:16.145 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:16.145 altname enp152s0f1np1 00:19:16.145 altname ens817f1np1 00:19:16.145 inet 192.168.100.9/24 scope global mlx_0_1 00:19:16.145 valid_lft forever preferred_lft forever 
00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:16.145 192.168.100.9' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:16.145 192.168.100.9' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:16.145 192.168.100.9' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:16.145 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2090442 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2090442 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2090442 ']' 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:16.146 13:47:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.146 [2024-06-11 13:47:08.019501] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:19:16.146 [2024-06-11 13:47:08.019554] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.146 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.146 [2024-06-11 13:47:08.098213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.146 [2024-06-11 13:47:08.185708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.146 [2024-06-11 13:47:08.185770] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.146 [2024-06-11 13:47:08.185779] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.146 [2024-06-11 13:47:08.185786] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.146 [2024-06-11 13:47:08.185792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.146 [2024-06-11 13:47:08.185927] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.146 [2024-06-11 13:47:08.186094] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.146 [2024-06-11 13:47:08.186245] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.146 [2024-06-11 13:47:08.186245] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.146 13:47:08 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.146 [2024-06-11 13:47:08.875448] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20620d0/0x20665c0) succeed. 00:19:16.146 [2024-06-11 13:47:08.888550] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2063710/0x20a7c50) succeed. 
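With the RDMA transport created above, the target still needs a bdev, a subsystem, a namespace, and a listener; the trace batches those RPCs through rpcs.txt, so their exact arguments are not echoed. A sketch of equivalent explicit calls, assuming rpc_cmd forwards to scripts/rpc.py on /var/tmp/spdk.sock and reusing names that appear elsewhere in this run (Malloc0, cnode0, host0, SPDKISFASTANDAWESOME):

# Sketch only: hand-issued equivalents of the batched target setup.
RPC='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0                               # 64 MB malloc bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420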
00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.146 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.146 Malloc0 00:19:16.407 [2024-06-11 13:47:09.064786] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2090813 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2090813 /var/tmp/bdevperf.sock 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2090813 ']' 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.407 { 00:19:16.407 "params": { 00:19:16.407 "name": "Nvme$subsystem", 00:19:16.407 "trtype": "$TEST_TRANSPORT", 00:19:16.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.407 "adrfam": "ipv4", 00:19:16.407 "trsvcid": "$NVMF_PORT", 00:19:16.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.407 "hdgst": ${hdgst:-false}, 00:19:16.407 "ddgst": ${ddgst:-false} 00:19:16.407 }, 00:19:16.407 "method": "bdev_nvme_attach_controller" 00:19:16.407 } 00:19:16.407 EOF 00:19:16.407 )") 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:16.407 13:47:09 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:16.407 "params": { 00:19:16.407 "name": "Nvme0", 00:19:16.407 "trtype": "rdma", 00:19:16.407 "traddr": "192.168.100.8", 00:19:16.407 "adrfam": "ipv4", 00:19:16.407 "trsvcid": "4420", 00:19:16.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:16.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:16.407 "hdgst": false, 00:19:16.407 "ddgst": false 00:19:16.407 }, 00:19:16.407 "method": "bdev_nvme_attach_controller" 00:19:16.407 }' 00:19:16.407 [2024-06-11 13:47:09.161288] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:16.407 [2024-06-11 13:47:09.161341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090813 ] 00:19:16.407 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.407 [2024-06-11 13:47:09.221457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.407 [2024-06-11 13:47:09.286051] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.668 Running I/O for 10 seconds... 
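The bdevperf configuration assembled by gen_nvmf_target_json above is passed through a process-substitution fd, so only the bdev_nvme_attach_controller fragment is visible in the trace. A sketch of running the same workload from a file, with the parameters copied from that fragment; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is an assumption here:

# Sketch only: standalone bdevperf invocation equivalent to the traced run.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth (-q 64), I/O size (-o 65536), verify workload and 10 s runtime as the trace.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10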
00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.239 13:47:09 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1264 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1264 -ge 100 ']' 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.239 13:47:10 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:19:18.181 [2024-06-11 13:47:11.034728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 
13:47:11.034924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.034989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.034999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:19:18.181 [2024-06-11 13:47:11.035006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035237] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:19:18.181 [2024-06-11 13:47:11.035244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:19:18.181 [2024-06-11 13:47:11.035260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:19:18.181 [2024-06-11 13:47:11.035276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-06-11 13:47:11.035285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45568 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:19:18.182 [2024-06-11 13:47:11.035491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:19:18.182 [2024-06-11 13:47:11.035508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:19:18.182 [2024-06-11 13:47:11.035524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 
13:47:11.035540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134a6000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134c7000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134e8000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b400000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7ff000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7de000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.035823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b7bd000 len:0x10000 key:0x182400 00:19:18.182 [2024-06-11 13:47:11.035830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.182 [2024-06-11 13:47:11.038148] 
bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:19:18.182 [2024-06-11 13:47:11.039357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:18.182 task offset: 40960 on job bdev=Nvme0n1 fails 00:19:18.182 00:19:18.182 Latency(us) 00:19:18.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.182 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:18.182 Job: Nvme0n1 ended in about 1.57 seconds with error 00:19:18.182 Verification LBA range: start 0x0 length 0x400 00:19:18.183 Nvme0n1 : 1.57 845.60 52.85 40.84 0.00 71434.45 2471.25 1020613.97 00:19:18.183 =================================================================================================================== 00:19:18.183 Total : 845.60 52.85 40.84 0.00 71434.45 2471.25 1020613.97 00:19:18.183 [2024-06-11 13:47:11.041429] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2090813 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.183 { 00:19:18.183 "params": { 00:19:18.183 "name": "Nvme$subsystem", 00:19:18.183 "trtype": "$TEST_TRANSPORT", 00:19:18.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.183 "adrfam": "ipv4", 00:19:18.183 "trsvcid": "$NVMF_PORT", 00:19:18.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.183 "hdgst": ${hdgst:-false}, 00:19:18.183 "ddgst": ${ddgst:-false} 00:19:18.183 }, 00:19:18.183 "method": "bdev_nvme_attach_controller" 00:19:18.183 } 00:19:18.183 EOF 00:19:18.183 )") 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
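The wall of ABORTED - SQ DELETION completions above is the fault that host_management.sh injects on purpose: once bdevperf has completed at least 100 reads (1264 were observed via bdev_get_iostat), the host NQN is removed from the subsystem, the target tears down the submission queue, every in-flight command is aborted, and bdev_nvme resets the controller. The host is then re-added and a second, short bdevperf run below confirms the path recovers. A simplified sketch of that sequence, using the same RPC names and threshold seen in the trace (the sleep interval is illustrative only):

# Poll bdevperf's view of the bdev until enough reads have completed.
while :; do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done
# Yank the host out from under the live connection, then restore it.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0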
00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:19:18.183 13:47:11 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:18.183 "params": { 00:19:18.183 "name": "Nvme0", 00:19:18.183 "trtype": "rdma", 00:19:18.183 "traddr": "192.168.100.8", 00:19:18.183 "adrfam": "ipv4", 00:19:18.183 "trsvcid": "4420", 00:19:18.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:18.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:18.183 "hdgst": false, 00:19:18.183 "ddgst": false 00:19:18.183 }, 00:19:18.183 "method": "bdev_nvme_attach_controller" 00:19:18.183 }' 00:19:18.442 [2024-06-11 13:47:11.098940] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:18.443 [2024-06-11 13:47:11.098991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091165 ] 00:19:18.443 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.443 [2024-06-11 13:47:11.158701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.443 [2024-06-11 13:47:11.222861] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.703 Running I/O for 1 seconds... 00:19:19.643 00:19:19.643 Latency(us) 00:19:19.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.643 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:19.643 Verification LBA range: start 0x0 length 0x400 00:19:19.643 Nvme0n1 : 1.01 2624.55 164.03 0.00 0.00 23833.67 723.63 43472.21 00:19:19.643 =================================================================================================================== 00:19:19.643 Total : 2624.55 164.03 0.00 0.00 23833.67 723.63 43472.21 00:19:19.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2090813 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:19.905 rmmod nvme_rdma 00:19:19.905 rmmod nvme_fabrics 00:19:19.905 
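The stoptarget/nvmftestfini teardown that follows reduces to a handful of commands visible in the trace; condensed into a sketch (pid 2090442 is the nvmf target from this run, and the wait assumes the target was started by the same shell, as it is in common.sh):

# Condensed teardown, mirroring the traced nvmftestfini path for rdma.
rm -f ./local-job0-0-verify.state
sync
modprobe -v -r nvme-rdma      # the trace shows this unloading nvme_rdma and nvme_fabrics
modprobe -v -r nvme-fabrics
kill 2090442                  # killprocess $nvmfpid
wait 2090442 2>/dev/null || true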
13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2090442 ']' 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2090442 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 2090442 ']' 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 2090442 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2090442 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2090442' 00:19:19.905 killing process with pid 2090442 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 2090442 00:19:19.905 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 2090442 00:19:20.166 [2024-06-11 13:47:12.861437] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:19:20.166 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.166 13:47:12 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:20.166 13:47:12 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:20.166 00:19:20.166 real 0m12.237s 00:19:20.166 user 0m24.281s 00:19:20.166 sys 0m6.119s 00:19:20.166 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:20.166 13:47:12 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:19:20.166 ************************************ 00:19:20.166 END TEST nvmf_host_management 00:19:20.166 ************************************ 00:19:20.166 13:47:12 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:19:20.166 13:47:12 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:20.166 13:47:12 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:20.166 13:47:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:20.166 ************************************ 00:19:20.166 START TEST nvmf_lvol 00:19:20.166 ************************************ 00:19:20.166 13:47:12 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:19:20.166 * Looking for test storage... 
00:19:20.166 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.166 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:20.167 13:47:13 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:19:20.427 13:47:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:19:27.017 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.018 13:47:19 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:27.018 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:27.018 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:27.018 Found net devices under 0000:98:00.0: mlx_0_0 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:27.018 Found net devices under 0000:98:00.1: mlx_0_1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:27.018 13:47:19 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:27.018 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:27.018 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:27.018 altname enp152s0f0np0 00:19:27.018 altname ens817f0np0 00:19:27.018 inet 192.168.100.8/24 scope global mlx_0_0 00:19:27.018 valid_lft forever preferred_lft forever 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:27.018 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:27.018 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:27.018 altname enp152s0f1np1 00:19:27.018 altname ens817f1np1 00:19:27.018 inet 192.168.100.9/24 scope global mlx_0_1 00:19:27.018 valid_lft forever preferred_lft forever 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:27.018 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:27.280 192.168.100.9' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:27.280 192.168.100.9' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:27.280 192.168.100.9' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:27.280 13:47:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2095224 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2095224 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@830 -- # '[' -z 2095224 ']' 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:27.280 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:27.280 [2024-06-11 13:47:20.058605] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:27.280 [2024-06-11 13:47:20.058676] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.280 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.280 [2024-06-11 13:47:20.126530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.541 [2024-06-11 13:47:20.202689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.541 [2024-06-11 13:47:20.202733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.541 [2024-06-11 13:47:20.202741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.541 [2024-06-11 13:47:20.202747] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.541 [2024-06-11 13:47:20.202753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.541 [2024-06-11 13:47:20.202891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.541 [2024-06-11 13:47:20.203014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.541 [2024-06-11 13:47:20.203024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.112 13:47:20 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:28.373 [2024-06-11 13:47:21.061437] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16ba270/0x16be760) succeed. 00:19:28.373 [2024-06-11 13:47:21.075496] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16bb810/0x16ffdf0) succeed. 
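For reference, the target bring-up traced above can be reproduced by hand with the same binaries and RPC calls; the sketch below is illustrative only, and $SPDK is an assumed placeholder for the SPDK checkout (this run uses /var/jenkins/workspace/nvmf-phy-autotest/spdk).

  SPDK=/path/to/spdk                                   # assumption: point this at your own build tree
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &     # same shm id, trace mask and core mask as this run
  # once the app listens on /var/tmp/spdk.sock, create the RDMA transport exactly as nvmf/common.sh does
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192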
00:19:28.373 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:28.635 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:28.635 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:28.895 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:28.895 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:28.895 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:29.156 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7e280cdd-8b97-4488-a57f-e938bc303573 00:19:29.156 13:47:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e280cdd-8b97-4488-a57f-e938bc303573 lvol 20 00:19:29.416 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2611602a-1c9c-423b-ba49-d7bc42311016 00:19:29.416 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:29.416 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2611602a-1c9c-423b-ba49-d7bc42311016 00:19:29.677 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:29.677 [2024-06-11 13:47:22.556672] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.677 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:29.938 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2095740 00:19:29.938 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:29.938 13:47:22 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:29.938 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.882 13:47:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2611602a-1c9c-423b-ba49-d7bc42311016 MY_SNAPSHOT 00:19:31.142 13:47:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eff44442-3590-44dc-856d-1bc196dd3eef 00:19:31.142 13:47:23 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2611602a-1c9c-423b-ba49-d7bc42311016 30 00:19:31.431 13:47:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eff44442-3590-44dc-856d-1bc196dd3eef MY_CLONE 00:19:31.431 13:47:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=a48d1378-a654-4548-8378-d62fd796b7c2 00:19:31.431 13:47:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a48d1378-a654-4548-8378-d62fd796b7c2 00:19:31.715 13:47:24 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2095740 00:19:41.710 Initializing NVMe Controllers 00:19:41.710 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:19:41.710 Controller IO queue size 128, less than required. 00:19:41.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:41.710 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:41.710 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:41.710 Initialization complete. Launching workers. 00:19:41.710 ======================================================== 00:19:41.710 Latency(us) 00:19:41.710 Device Information : IOPS MiB/s Average min max 00:19:41.710 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 23116.30 90.30 5538.17 1839.11 38133.22 00:19:41.710 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 23080.10 90.16 5546.44 2624.58 33947.03 00:19:41.710 ======================================================== 00:19:41.710 Total : 46196.39 180.45 5542.30 1839.11 38133.22 00:19:41.710 00:19:41.710 13:47:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.710 13:47:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2611602a-1c9c-423b-ba49-d7bc42311016 00:19:41.710 13:47:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e280cdd-8b97-4488-a57f-e938bc303573 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:41.711 rmmod nvme_rdma 00:19:41.711 rmmod nvme_fabrics 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2095224 ']' 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2095224 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 2095224 ']' 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@953 -- # kill -0 2095224 00:19:41.711 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2095224 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2095224' 00:19:41.972 killing process with pid 2095224 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 2095224 00:19:41.972 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 2095224 00:19:42.233 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:42.233 13:47:34 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:42.233 00:19:42.233 real 0m21.936s 00:19:42.233 user 1m10.398s 00:19:42.233 sys 0m6.109s 00:19:42.233 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:42.233 13:47:34 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:19:42.233 ************************************ 00:19:42.233 END TEST nvmf_lvol 00:19:42.233 ************************************ 00:19:42.233 13:47:34 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:19:42.233 13:47:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:42.233 13:47:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:42.233 13:47:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:42.233 ************************************ 00:19:42.233 START TEST nvmf_lvs_grow 00:19:42.233 ************************************ 00:19:42.233 13:47:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:19:42.233 * Looking for test storage... 
00:19:42.233 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:42.233 13:47:35 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.233 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:19:42.233 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.233 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.234 13:47:35 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:19:50.372 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:19:50.372 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:50.372 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:19:50.373 Found net devices under 0000:98:00.0: mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.373 13:47:42 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:19:50.373 Found net devices under 0000:98:00.1: mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:50.373 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:50.373 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:19:50.373 altname enp152s0f0np0 00:19:50.373 altname ens817f0np0 00:19:50.373 inet 192.168.100.8/24 scope global mlx_0_0 00:19:50.373 valid_lft forever preferred_lft forever 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:50.373 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:50.373 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:19:50.373 altname enp152s0f1np1 00:19:50.373 altname ens817f1np1 00:19:50.373 inet 192.168.100.9/24 scope global mlx_0_1 00:19:50.373 valid_lft forever preferred_lft forever 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:50.373 192.168.100.9' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:50.373 192.168.100.9' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:50.373 192.168.100.9' 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:50.373 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2101987 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2101987 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 2101987 ']' 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:50.374 13:47:42 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:50.374 [2024-06-11 13:47:42.383732] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:50.374 [2024-06-11 13:47:42.383789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.374 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.374 [2024-06-11 13:47:42.446685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.374 [2024-06-11 13:47:42.515855] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.374 [2024-06-11 13:47:42.515894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.374 [2024-06-11 13:47:42.515902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.374 [2024-06-11 13:47:42.515908] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.374 [2024-06-11 13:47:42.515914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.374 [2024-06-11 13:47:42.515931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.374 13:47:43 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:50.635 [2024-06-11 13:47:43.351287] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x105dc40/0x1062130) succeed. 00:19:50.635 [2024-06-11 13:47:43.363452] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x105f140/0x10a37c0) succeed. 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:50.635 ************************************ 00:19:50.635 START TEST lvs_grow_clean 00:19:50.635 ************************************ 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:50.635 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:50.896 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:50.896 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:51.157 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:19:51.157 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:19:51.157 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:51.157 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:51.157 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:51.157 13:47:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa lvol 150 00:19:51.417 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2529bebe-c8c2-4e22-b8db-a64b1efd8117 00:19:51.418 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:51.418 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:51.418 [2024-06-11 13:47:44.226442] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:51.418 [2024-06-11 13:47:44.226491] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:51.418 true 00:19:51.418 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:19:51.418 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:51.678 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:51.678 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:51.678 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2529bebe-c8c2-4e22-b8db-a64b1efd8117 00:19:51.938 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:51.938 [2024-06-11 13:47:44.836530] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:52.199 13:47:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2102397 
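The total_data_clusters values reported here and after the grow step further down follow from the 4194304-byte cluster size passed to bdev_lvol_create_lvstore: the 200M aio_bdev holds 50 clusters, one of which appears to go to lvstore metadata (hence data_clusters=49), and after the truncate to 400M plus bdev_aio_rescan the grow check reports 99. A quick illustrative check in shell:

  # expected cluster counts for the 200 MiB and 400 MiB aio files with 4 MiB clusters
  echo $(( 200 * 1024 * 1024 / 4194304 ))   # 50, of which 49 show up as data clusters
  echo $(( 400 * 1024 * 1024 / 4194304 ))   # 100, matching the data_clusters=99 seen after bdev_lvol_grow_lvstore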
00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2102397 /var/tmp/bdevperf.sock 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 2102397 ']' 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:52.199 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:19:52.199 [2024-06-11 13:47:45.047999] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:19:52.199 [2024-06-11 13:47:45.048057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102397 ] 00:19:52.199 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.459 [2024-06-11 13:47:45.122603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.459 [2024-06-11 13:47:45.186997] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.031 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:53.031 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:19:53.031 13:47:45 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:53.292 Nvme0n1 00:19:53.292 13:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:53.553 [ 00:19:53.553 { 00:19:53.553 "name": "Nvme0n1", 00:19:53.553 "aliases": [ 00:19:53.553 "2529bebe-c8c2-4e22-b8db-a64b1efd8117" 00:19:53.553 ], 00:19:53.553 "product_name": "NVMe disk", 00:19:53.553 "block_size": 4096, 00:19:53.553 "num_blocks": 38912, 00:19:53.553 "uuid": "2529bebe-c8c2-4e22-b8db-a64b1efd8117", 00:19:53.553 "assigned_rate_limits": { 00:19:53.553 "rw_ios_per_sec": 0, 00:19:53.553 "rw_mbytes_per_sec": 0, 00:19:53.553 "r_mbytes_per_sec": 0, 00:19:53.553 "w_mbytes_per_sec": 0 00:19:53.553 }, 00:19:53.553 "claimed": false, 00:19:53.553 "zoned": false, 00:19:53.553 "supported_io_types": { 00:19:53.553 "read": true, 00:19:53.553 "write": true, 00:19:53.553 
"unmap": true, 00:19:53.553 "write_zeroes": true, 00:19:53.553 "flush": true, 00:19:53.553 "reset": true, 00:19:53.553 "compare": true, 00:19:53.553 "compare_and_write": true, 00:19:53.553 "abort": true, 00:19:53.553 "nvme_admin": true, 00:19:53.553 "nvme_io": true 00:19:53.553 }, 00:19:53.553 "memory_domains": [ 00:19:53.553 { 00:19:53.553 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:53.553 "dma_device_type": 0 00:19:53.553 } 00:19:53.553 ], 00:19:53.553 "driver_specific": { 00:19:53.553 "nvme": [ 00:19:53.553 { 00:19:53.553 "trid": { 00:19:53.553 "trtype": "RDMA", 00:19:53.553 "adrfam": "IPv4", 00:19:53.553 "traddr": "192.168.100.8", 00:19:53.553 "trsvcid": "4420", 00:19:53.553 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:53.553 }, 00:19:53.553 "ctrlr_data": { 00:19:53.553 "cntlid": 1, 00:19:53.553 "vendor_id": "0x8086", 00:19:53.553 "model_number": "SPDK bdev Controller", 00:19:53.553 "serial_number": "SPDK0", 00:19:53.553 "firmware_revision": "24.09", 00:19:53.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:53.553 "oacs": { 00:19:53.553 "security": 0, 00:19:53.553 "format": 0, 00:19:53.553 "firmware": 0, 00:19:53.553 "ns_manage": 0 00:19:53.553 }, 00:19:53.553 "multi_ctrlr": true, 00:19:53.553 "ana_reporting": false 00:19:53.553 }, 00:19:53.553 "vs": { 00:19:53.553 "nvme_version": "1.3" 00:19:53.553 }, 00:19:53.553 "ns_data": { 00:19:53.553 "id": 1, 00:19:53.553 "can_share": true 00:19:53.553 } 00:19:53.553 } 00:19:53.553 ], 00:19:53.553 "mp_policy": "active_passive" 00:19:53.553 } 00:19:53.553 } 00:19:53.553 ] 00:19:53.553 13:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2102712 00:19:53.553 13:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:53.553 13:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.553 Running I/O for 10 seconds... 
00:19:54.494 Latency(us) 00:19:54.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:54.495 Nvme0n1 : 1.00 26021.00 101.64 0.00 0.00 0.00 0.00 0.00 00:19:54.495 =================================================================================================================== 00:19:54.495 Total : 26021.00 101.64 0.00 0.00 0.00 0.00 0.00 00:19:54.495 00:19:55.435 13:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:19:55.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:55.435 Nvme0n1 : 2.00 26240.50 102.50 0.00 0.00 0.00 0.00 0.00 00:19:55.435 =================================================================================================================== 00:19:55.435 Total : 26240.50 102.50 0.00 0.00 0.00 0.00 0.00 00:19:55.435 00:19:55.695 true 00:19:55.695 13:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:19:55.695 13:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:55.695 13:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:55.695 13:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:55.695 13:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2102712 00:19:56.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:56.636 Nvme0n1 : 3.00 26316.00 102.80 0.00 0.00 0.00 0.00 0.00 00:19:56.636 =================================================================================================================== 00:19:56.636 Total : 26316.00 102.80 0.00 0.00 0.00 0.00 0.00 00:19:56.636 00:19:57.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:57.577 Nvme0n1 : 4.00 26368.00 103.00 0.00 0.00 0.00 0.00 0.00 00:19:57.577 =================================================================================================================== 00:19:57.577 Total : 26368.00 103.00 0.00 0.00 0.00 0.00 0.00 00:19:57.577 00:19:58.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:58.518 Nvme0n1 : 5.00 26413.40 103.18 0.00 0.00 0.00 0.00 0.00 00:19:58.518 =================================================================================================================== 00:19:58.518 Total : 26413.40 103.18 0.00 0.00 0.00 0.00 0.00 00:19:58.518 00:19:59.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:59.459 Nvme0n1 : 6.00 26447.83 103.31 0.00 0.00 0.00 0.00 0.00 00:19:59.459 =================================================================================================================== 00:19:59.459 Total : 26447.83 103.31 0.00 0.00 0.00 0.00 0.00 00:19:59.459 00:20:00.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:00.843 Nvme0n1 : 7.00 26477.57 103.43 0.00 0.00 0.00 0.00 0.00 00:20:00.843 =================================================================================================================== 00:20:00.843 Total : 26477.57 103.43 0.00 0.00 
0.00 0.00 0.00 00:20:00.843 00:20:01.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.785 Nvme0n1 : 8.00 26496.12 103.50 0.00 0.00 0.00 0.00 0.00 00:20:01.785 =================================================================================================================== 00:20:01.785 Total : 26496.12 103.50 0.00 0.00 0.00 0.00 0.00 00:20:01.785 00:20:02.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:02.727 Nvme0n1 : 9.00 26514.11 103.57 0.00 0.00 0.00 0.00 0.00 00:20:02.727 =================================================================================================================== 00:20:02.727 Total : 26514.11 103.57 0.00 0.00 0.00 0.00 0.00 00:20:02.727 00:20:03.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:03.666 Nvme0n1 : 10.00 26528.00 103.62 0.00 0.00 0.00 0.00 0.00 00:20:03.667 =================================================================================================================== 00:20:03.667 Total : 26528.00 103.62 0.00 0.00 0.00 0.00 0.00 00:20:03.667 00:20:03.667 00:20:03.667 Latency(us) 00:20:03.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:03.667 Nvme0n1 : 10.00 26529.19 103.63 0.00 0.00 4821.45 3358.72 13544.11 00:20:03.667 =================================================================================================================== 00:20:03.667 Total : 26529.19 103.63 0.00 0.00 4821.45 3358.72 13544.11 00:20:03.667 0 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2102397 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 2102397 ']' 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 2102397 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2102397 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2102397' 00:20:03.667 killing process with pid 2102397 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 2102397 00:20:03.667 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.667 00:20:03.667 Latency(us) 00:20:03.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.667 =================================================================================================================== 00:20:03.667 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 2102397 00:20:03.667 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:03.927 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:04.187 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:04.187 13:47:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:04.187 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:04.187 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:20:04.187 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:04.448 [2024-06-11 13:47:57.239363] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:04.448 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:04.449 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:04.449 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:04.449 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:04.710 request: 00:20:04.710 { 00:20:04.710 "uuid": "8f1d0ac4-c07e-4b4a-a024-882e069547fa", 00:20:04.710 "method": "bdev_lvol_get_lvstores", 00:20:04.710 "req_id": 1 00:20:04.710 } 00:20:04.710 Got JSON-RPC error response 00:20:04.710 response: 00:20:04.710 { 00:20:04.710 "code": -19, 00:20:04.710 "message": "No such device" 00:20:04.710 } 
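The failed bdev_lvol_get_lvstores call just above is the expected outcome once aio_bdev is deleted: rpc.py exits non-zero and echoes the JSON-RPC error (code -19, "No such device") shown in the request/response dump. A minimal sketch of probing for the lvstore from a standalone script; the lvstore_exists helper name is hypothetical, and it assumes rpc.py and jq are reachable on PATH (the test invokes them by full path):

  # Print the cluster counts if the lvstore still exists; return 1 if the
  # target reports it gone (the -19 "No such device" error seen above).
  lvstore_exists() {
    local uuid=$1 out
    if out=$(rpc.py bdev_lvol_get_lvstores -u "$uuid" 2>&1); then
      echo "$out" | jq -r '.[0] | "total=\(.total_data_clusters) free=\(.free_clusters)"'
      return 0
    fi
    echo "lvstore $uuid not found" >&2
    return 1
  }

  lvstore_exists 8f1d0ac4-c07e-4b4a-a024-882e069547fa || echo "already removed"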
00:20:04.710 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:20:04.710 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:04.710 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:04.710 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:04.710 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:04.710 aio_bdev 00:20:04.969 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2529bebe-c8c2-4e22-b8db-a64b1efd8117 00:20:04.969 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=2529bebe-c8c2-4e22-b8db-a64b1efd8117 00:20:04.969 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:04.970 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:20:04.970 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:04.970 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:04.970 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:04.970 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2529bebe-c8c2-4e22-b8db-a64b1efd8117 -t 2000 00:20:05.230 [ 00:20:05.230 { 00:20:05.230 "name": "2529bebe-c8c2-4e22-b8db-a64b1efd8117", 00:20:05.230 "aliases": [ 00:20:05.230 "lvs/lvol" 00:20:05.230 ], 00:20:05.230 "product_name": "Logical Volume", 00:20:05.230 "block_size": 4096, 00:20:05.230 "num_blocks": 38912, 00:20:05.230 "uuid": "2529bebe-c8c2-4e22-b8db-a64b1efd8117", 00:20:05.230 "assigned_rate_limits": { 00:20:05.230 "rw_ios_per_sec": 0, 00:20:05.230 "rw_mbytes_per_sec": 0, 00:20:05.230 "r_mbytes_per_sec": 0, 00:20:05.230 "w_mbytes_per_sec": 0 00:20:05.230 }, 00:20:05.230 "claimed": false, 00:20:05.230 "zoned": false, 00:20:05.230 "supported_io_types": { 00:20:05.230 "read": true, 00:20:05.230 "write": true, 00:20:05.230 "unmap": true, 00:20:05.230 "write_zeroes": true, 00:20:05.230 "flush": false, 00:20:05.230 "reset": true, 00:20:05.230 "compare": false, 00:20:05.230 "compare_and_write": false, 00:20:05.230 "abort": false, 00:20:05.230 "nvme_admin": false, 00:20:05.230 "nvme_io": false 00:20:05.230 }, 00:20:05.230 "driver_specific": { 00:20:05.230 "lvol": { 00:20:05.230 "lvol_store_uuid": "8f1d0ac4-c07e-4b4a-a024-882e069547fa", 00:20:05.230 "base_bdev": "aio_bdev", 00:20:05.230 "thin_provision": false, 00:20:05.230 "num_allocated_clusters": 38, 00:20:05.230 "snapshot": false, 00:20:05.230 "clone": false, 00:20:05.230 "esnap_clone": false 00:20:05.230 } 00:20:05.230 } 00:20:05.230 } 00:20:05.231 ] 00:20:05.231 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:20:05.231 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 
-u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:05.231 13:47:57 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:05.231 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:05.231 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:05.231 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:05.492 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:05.492 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2529bebe-c8c2-4e22-b8db-a64b1efd8117 00:20:05.492 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f1d0ac4-c07e-4b4a-a024-882e069547fa 00:20:05.752 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:06.013 00:20:06.013 real 0m15.284s 00:20:06.013 user 0m15.269s 00:20:06.013 sys 0m0.991s 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:06.013 ************************************ 00:20:06.013 END TEST lvs_grow_clean 00:20:06.013 ************************************ 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:06.013 13:47:58 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:06.014 ************************************ 00:20:06.014 START TEST lvs_grow_dirty 00:20:06.014 ************************************ 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:06.014 13:47:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:06.274 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:06.274 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:06.274 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1da29340-e418-49b6-81b9-8cde35356e7c 00:20:06.535 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:06.535 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:06.535 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:06.535 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:06.535 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1da29340-e418-49b6-81b9-8cde35356e7c lvol 150 00:20:06.796 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:06.796 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:06.796 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:06.796 [2024-06-11 13:47:59.614533] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:06.796 [2024-06-11 13:47:59.614581] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:06.796 true 00:20:06.796 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:06.796 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:07.057 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:07.057 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:07.057 13:47:59 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:07.317 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:07.578 [2024-06-11 13:48:00.232805] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2105580 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2105580 /var/tmp/bdevperf.sock 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2105580 ']' 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:07.578 13:48:00 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:07.578 [2024-06-11 13:48:00.426690] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
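The sizes traced above line up: the AIO file grows from 200 MiB to 400 MiB with a 4096-byte block size and a 4 MiB (4194304-byte) cluster size, which is exactly the 51200 -> 102400 block-count change reported by bdev_aio_rescan and the 49 -> 99 usable-cluster counts checked by the test (the difference between raw and usable clusters goes to lvstore metadata). A quick sketch of the arithmetic:

  # Reproduce the numbers printed by the rescan notice and the cluster checks above.
  block=4096 cluster=$((4 * 1024 * 1024))
  for mb in 200 400; do
    bytes=$(( mb * 1024 * 1024 ))
    echo "${mb}M: $(( bytes / block )) blocks, $(( bytes / cluster )) clusters"
  done
  # 200M: 51200 blocks, 50 clusters  (49 usable -> data_clusters == 49)
  # 400M: 102400 blocks, 100 clusters (99 usable -> data_clusters == 99)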
00:20:07.579 [2024-06-11 13:48:00.426740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105580 ] 00:20:07.579 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.840 [2024-06-11 13:48:00.504029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.840 [2024-06-11 13:48:00.569311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.410 13:48:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:08.410 13:48:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:20:08.410 13:48:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:08.671 Nvme0n1 00:20:08.671 13:48:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:08.964 [ 00:20:08.964 { 00:20:08.964 "name": "Nvme0n1", 00:20:08.964 "aliases": [ 00:20:08.964 "dad43b81-3c7d-459d-b64e-743462f63f7a" 00:20:08.964 ], 00:20:08.964 "product_name": "NVMe disk", 00:20:08.964 "block_size": 4096, 00:20:08.964 "num_blocks": 38912, 00:20:08.964 "uuid": "dad43b81-3c7d-459d-b64e-743462f63f7a", 00:20:08.964 "assigned_rate_limits": { 00:20:08.964 "rw_ios_per_sec": 0, 00:20:08.964 "rw_mbytes_per_sec": 0, 00:20:08.964 "r_mbytes_per_sec": 0, 00:20:08.964 "w_mbytes_per_sec": 0 00:20:08.964 }, 00:20:08.964 "claimed": false, 00:20:08.964 "zoned": false, 00:20:08.964 "supported_io_types": { 00:20:08.964 "read": true, 00:20:08.964 "write": true, 00:20:08.964 "unmap": true, 00:20:08.964 "write_zeroes": true, 00:20:08.964 "flush": true, 00:20:08.964 "reset": true, 00:20:08.964 "compare": true, 00:20:08.964 "compare_and_write": true, 00:20:08.964 "abort": true, 00:20:08.964 "nvme_admin": true, 00:20:08.964 "nvme_io": true 00:20:08.964 }, 00:20:08.964 "memory_domains": [ 00:20:08.964 { 00:20:08.964 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:08.964 "dma_device_type": 0 00:20:08.964 } 00:20:08.964 ], 00:20:08.964 "driver_specific": { 00:20:08.964 "nvme": [ 00:20:08.964 { 00:20:08.964 "trid": { 00:20:08.964 "trtype": "RDMA", 00:20:08.964 "adrfam": "IPv4", 00:20:08.964 "traddr": "192.168.100.8", 00:20:08.964 "trsvcid": "4420", 00:20:08.964 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:08.964 }, 00:20:08.964 "ctrlr_data": { 00:20:08.964 "cntlid": 1, 00:20:08.964 "vendor_id": "0x8086", 00:20:08.964 "model_number": "SPDK bdev Controller", 00:20:08.964 "serial_number": "SPDK0", 00:20:08.964 "firmware_revision": "24.09", 00:20:08.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:08.964 "oacs": { 00:20:08.964 "security": 0, 00:20:08.964 "format": 0, 00:20:08.964 "firmware": 0, 00:20:08.964 "ns_manage": 0 00:20:08.964 }, 00:20:08.964 "multi_ctrlr": true, 00:20:08.964 "ana_reporting": false 00:20:08.964 }, 00:20:08.964 "vs": { 00:20:08.964 "nvme_version": "1.3" 00:20:08.964 }, 00:20:08.964 "ns_data": { 00:20:08.964 "id": 1, 00:20:08.964 "can_share": true 00:20:08.964 } 00:20:08.964 } 00:20:08.964 ], 00:20:08.964 "mp_policy": "active_passive" 00:20:08.964 } 00:20:08.964 } 00:20:08.964 ] 00:20:08.964 13:48:01 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2105874 00:20:08.964 13:48:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:08.964 13:48:01 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.964 Running I/O for 10 seconds... 00:20:09.922 Latency(us) 00:20:09.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.922 Nvme0n1 : 1.00 25920.00 101.25 0.00 0.00 0.00 0.00 0.00 00:20:09.922 =================================================================================================================== 00:20:09.922 Total : 25920.00 101.25 0.00 0.00 0.00 0.00 0.00 00:20:09.922 00:20:10.866 13:48:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:10.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:10.866 Nvme0n1 : 2.00 26178.50 102.26 0.00 0.00 0.00 0.00 0.00 00:20:10.866 =================================================================================================================== 00:20:10.866 Total : 26178.50 102.26 0.00 0.00 0.00 0.00 0.00 00:20:10.866 00:20:10.866 true 00:20:11.127 13:48:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:11.127 13:48:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:11.127 13:48:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:11.127 13:48:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:11.127 13:48:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2105874 00:20:12.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:12.071 Nvme0n1 : 3.00 26273.00 102.63 0.00 0.00 0.00 0.00 0.00 00:20:12.071 =================================================================================================================== 00:20:12.071 Total : 26273.00 102.63 0.00 0.00 0.00 0.00 0.00 00:20:12.071 00:20:13.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:13.013 Nvme0n1 : 4.00 26352.25 102.94 0.00 0.00 0.00 0.00 0.00 00:20:13.013 =================================================================================================================== 00:20:13.013 Total : 26352.25 102.94 0.00 0.00 0.00 0.00 0.00 00:20:13.013 00:20:13.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:13.952 Nvme0n1 : 5.00 26400.40 103.13 0.00 0.00 0.00 0.00 0.00 00:20:13.952 =================================================================================================================== 00:20:13.952 Total : 26400.40 103.13 0.00 0.00 0.00 0.00 0.00 00:20:13.952 00:20:14.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:14.891 Nvme0n1 : 6.00 26437.83 103.27 0.00 0.00 0.00 0.00 0.00 00:20:14.891 
=================================================================================================================== 00:20:14.891 Total : 26437.83 103.27 0.00 0.00 0.00 0.00 0.00 00:20:14.891 00:20:15.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:15.832 Nvme0n1 : 7.00 26468.43 103.39 0.00 0.00 0.00 0.00 0.00 00:20:15.832 =================================================================================================================== 00:20:15.832 Total : 26468.43 103.39 0.00 0.00 0.00 0.00 0.00 00:20:15.832 00:20:17.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:17.215 Nvme0n1 : 8.00 26484.00 103.45 0.00 0.00 0.00 0.00 0.00 00:20:17.215 =================================================================================================================== 00:20:17.215 Total : 26484.00 103.45 0.00 0.00 0.00 0.00 0.00 00:20:17.215 00:20:18.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:18.154 Nvme0n1 : 9.00 26478.22 103.43 0.00 0.00 0.00 0.00 0.00 00:20:18.154 =================================================================================================================== 00:20:18.154 Total : 26478.22 103.43 0.00 0.00 0.00 0.00 0.00 00:20:18.154 00:20:19.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:19.094 Nvme0n1 : 10.00 26492.90 103.49 0.00 0.00 0.00 0.00 0.00 00:20:19.094 =================================================================================================================== 00:20:19.094 Total : 26492.90 103.49 0.00 0.00 0.00 0.00 0.00 00:20:19.094 00:20:19.094 00:20:19.094 Latency(us) 00:20:19.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:19.095 Nvme0n1 : 10.00 26495.13 103.50 0.00 0.00 4827.48 3413.33 15947.09 00:20:19.095 =================================================================================================================== 00:20:19.095 Total : 26495.13 103.50 0.00 0.00 4827.48 3413.33 15947.09 00:20:19.095 0 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2105580 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 2105580 ']' 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 2105580 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2105580 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2105580' 00:20:19.095 killing process with pid 2105580 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 2105580 00:20:19.095 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.095 00:20:19.095 Latency(us) 00:20:19.095 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.095 =================================================================================================================== 00:20:19.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 2105580 00:20:19.095 13:48:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:19.354 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2101987 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2101987 00:20:19.614 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2101987 Killed "${NVMF_APP[@]}" "$@" 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2108464 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2108464 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2108464 ']' 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
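After the dirty kill -9 of the original target, a fresh nvmf_tgt is launched and the harness waits for its RPC socket at /var/tmp/spdk.sock (max_retries=100 above). One way to poll for readiness from a plain shell, assuming rpc.py is on PATH and using rpc_get_methods as a cheap probe RPC; the harness's own waitforlisten helper is more thorough, this only shows the idea:

  # Poll until the freshly started nvmf_tgt answers RPCs on its Unix socket.
  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    if rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
      echo "target is listening on $sock"
      break
    fi
    sleep 0.1
  done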
00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:19.614 13:48:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:19.874 [2024-06-11 13:48:12.537266] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:19.874 [2024-06-11 13:48:12.537318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.874 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.874 [2024-06-11 13:48:12.599085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.874 [2024-06-11 13:48:12.663731] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.874 [2024-06-11 13:48:12.663767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.874 [2024-06-11 13:48:12.663775] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.874 [2024-06-11 13:48:12.663781] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.874 [2024-06-11 13:48:12.663787] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.874 [2024-06-11 13:48:12.663804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.444 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:20.704 [2024-06-11 13:48:13.472745] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:20.704 [2024-06-11 13:48:13.472838] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:20.704 [2024-06-11 13:48:13.472869] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:20.704 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:20.964 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dad43b81-3c7d-459d-b64e-743462f63f7a -t 2000 00:20:20.964 [ 00:20:20.964 { 00:20:20.964 "name": "dad43b81-3c7d-459d-b64e-743462f63f7a", 00:20:20.964 "aliases": [ 00:20:20.964 "lvs/lvol" 00:20:20.964 ], 00:20:20.964 "product_name": "Logical Volume", 00:20:20.964 "block_size": 4096, 00:20:20.964 "num_blocks": 38912, 00:20:20.964 "uuid": "dad43b81-3c7d-459d-b64e-743462f63f7a", 00:20:20.964 "assigned_rate_limits": { 00:20:20.964 "rw_ios_per_sec": 0, 00:20:20.964 "rw_mbytes_per_sec": 0, 00:20:20.964 "r_mbytes_per_sec": 0, 00:20:20.964 "w_mbytes_per_sec": 0 00:20:20.964 }, 00:20:20.964 "claimed": false, 00:20:20.964 "zoned": false, 00:20:20.964 "supported_io_types": { 00:20:20.964 "read": true, 00:20:20.964 "write": true, 00:20:20.964 "unmap": true, 00:20:20.964 "write_zeroes": true, 00:20:20.964 "flush": false, 00:20:20.964 "reset": true, 00:20:20.964 "compare": false, 00:20:20.964 "compare_and_write": false, 00:20:20.964 "abort": false, 00:20:20.964 "nvme_admin": false, 00:20:20.964 "nvme_io": false 00:20:20.964 }, 00:20:20.964 "driver_specific": { 00:20:20.964 "lvol": { 00:20:20.964 "lvol_store_uuid": "1da29340-e418-49b6-81b9-8cde35356e7c", 00:20:20.964 "base_bdev": "aio_bdev", 00:20:20.964 "thin_provision": false, 00:20:20.964 "num_allocated_clusters": 38, 00:20:20.964 "snapshot": false, 00:20:20.964 "clone": false, 00:20:20.964 "esnap_clone": false 00:20:20.964 } 00:20:20.964 } 00:20:20.964 } 00:20:20.964 ] 00:20:20.964 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:20:20.964 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:20.965 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:20:21.225 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:20:21.226 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:20:21.226 13:48:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:21.226 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:20:21.226 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:21.486 [2024-06-11 13:48:14.268742] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 
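The NOT-wrapped rpc.py call above is the harness's expected-failure idiom: with aio_bdev hot-removed and the lvstore closed, bdev_lvol_get_lvstores has to fail, and the step passes only because it does (es=1 below). The autotest_common.sh helper is more elaborate than this; the expect_failure name is hypothetical and the sketch only shows the core pattern:

  # Run a command that is supposed to fail; succeed only if it actually fails.
  expect_failure() {
    if "$@"; then
      echo "unexpected success: $*" >&2
      return 1
    fi
    return 0
  }

  expect_failure rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c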
00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:21.486 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:21.747 request: 00:20:21.747 { 00:20:21.747 "uuid": "1da29340-e418-49b6-81b9-8cde35356e7c", 00:20:21.747 "method": "bdev_lvol_get_lvstores", 00:20:21.747 "req_id": 1 00:20:21.747 } 00:20:21.747 Got JSON-RPC error response 00:20:21.747 response: 00:20:21.747 { 00:20:21.747 "code": -19, 00:20:21.747 "message": "No such device" 00:20:21.747 } 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:21.747 aio_bdev 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:20:21.747 13:48:14 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:20:21.747 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:22.008 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dad43b81-3c7d-459d-b64e-743462f63f7a -t 2000 00:20:22.269 [ 00:20:22.269 { 00:20:22.269 "name": "dad43b81-3c7d-459d-b64e-743462f63f7a", 00:20:22.269 "aliases": [ 00:20:22.269 "lvs/lvol" 00:20:22.269 ], 00:20:22.269 "product_name": "Logical Volume", 00:20:22.269 "block_size": 4096, 00:20:22.269 "num_blocks": 38912, 00:20:22.269 "uuid": "dad43b81-3c7d-459d-b64e-743462f63f7a", 00:20:22.269 "assigned_rate_limits": { 00:20:22.269 "rw_ios_per_sec": 0, 00:20:22.269 "rw_mbytes_per_sec": 0, 00:20:22.269 "r_mbytes_per_sec": 0, 00:20:22.269 "w_mbytes_per_sec": 0 00:20:22.269 }, 00:20:22.269 "claimed": false, 00:20:22.269 "zoned": false, 00:20:22.269 "supported_io_types": { 00:20:22.269 "read": true, 00:20:22.269 "write": true, 00:20:22.269 "unmap": true, 00:20:22.269 "write_zeroes": true, 00:20:22.269 "flush": false, 00:20:22.269 "reset": true, 00:20:22.269 "compare": false, 00:20:22.269 "compare_and_write": false, 00:20:22.269 "abort": false, 00:20:22.269 "nvme_admin": false, 00:20:22.269 "nvme_io": false 00:20:22.269 }, 00:20:22.269 "driver_specific": { 00:20:22.269 "lvol": { 00:20:22.269 "lvol_store_uuid": "1da29340-e418-49b6-81b9-8cde35356e7c", 00:20:22.269 "base_bdev": "aio_bdev", 00:20:22.269 "thin_provision": false, 00:20:22.269 "num_allocated_clusters": 38, 00:20:22.269 "snapshot": false, 00:20:22.269 "clone": false, 00:20:22.269 "esnap_clone": false 00:20:22.269 } 00:20:22.269 } 00:20:22.269 } 00:20:22.269 ] 00:20:22.269 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:20:22.269 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:22.269 13:48:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:22.269 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:22.269 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:22.269 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:22.530 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:22.530 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dad43b81-3c7d-459d-b64e-743462f63f7a 00:20:22.790 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1da29340-e418-49b6-81b9-8cde35356e7c 00:20:22.790 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 
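The bdev dump above is internally consistent: a 150 MiB lvol on a store with 4 MiB clusters needs 38 whole clusters, which matches the reported num_allocated_clusters, the 38912 blocks of 4096 bytes, and the 61 free clusters left out of 99. The arithmetic, as a sketch:

  # Check the lvol numbers reported by bdev_get_bdevs above.
  cluster_mb=4 lvol_mb=150 block=4096 total_clusters=99
  clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))
  echo "allocated clusters: $clusters"                                   # 38
  echo "num_blocks: $(( clusters * cluster_mb * 1024 * 1024 / block ))"  # 38912
  echo "free clusters: $(( total_clusters - clusters ))"                 # 61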
00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:20:23.051 00:20:23.051 real 0m16.980s 00:20:23.051 user 0m44.755s 00:20:23.051 sys 0m2.367s 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:23.051 ************************************ 00:20:23.051 END TEST lvs_grow_dirty 00:20:23.051 ************************************ 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:23.051 nvmf_trace.0 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:23.051 rmmod nvme_rdma 00:20:23.051 rmmod nvme_fabrics 00:20:23.051 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2108464 ']' 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2108464 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 2108464 ']' 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 2108464 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:23.313 13:48:15 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2108464 00:20:23.313 13:48:16 
nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2108464' 00:20:23.313 killing process with pid 2108464 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 2108464 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 2108464 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:23.313 00:20:23.313 real 0m41.176s 00:20:23.313 user 1m6.102s 00:20:23.313 sys 0m9.144s 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:23.313 13:48:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:23.313 ************************************ 00:20:23.313 END TEST nvmf_lvs_grow 00:20:23.313 ************************************ 00:20:23.313 13:48:16 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:20:23.313 13:48:16 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:23.313 13:48:16 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:23.313 13:48:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:23.574 ************************************ 00:20:23.574 START TEST nvmf_bdev_io_wait 00:20:23.574 ************************************ 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:20:23.574 * Looking for test storage... 
00:20:23.574 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.574 13:48:16 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.574 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.575 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.575 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.575 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.575 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.575 13:48:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.720 
13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:31.720 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:31.720 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:31.720 Found net devices under 0000:98:00.0: mlx_0_0 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:31.720 Found net devices under 0000:98:00.1: mlx_0_1 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:31.720 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:20:31.721 13:48:23 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:31.721 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:31.721 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:20:31.721 altname enp152s0f0np0 00:20:31.721 altname ens817f0np0 00:20:31.721 inet 192.168.100.8/24 scope global mlx_0_0 00:20:31.721 valid_lft forever preferred_lft forever 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:31.721 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:31.721 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:20:31.721 altname enp152s0f1np1 00:20:31.721 altname ens817f1np1 00:20:31.721 inet 192.168.100.9/24 scope global mlx_0_1 00:20:31.721 valid_lft forever preferred_lft forever 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:31.721 13:48:23 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:31.721 192.168.100.9' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:31.721 192.168.100.9' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:31.721 192.168.100.9' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2113091 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2113091 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 2113091 ']' 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:31.721 13:48:23 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.721 [2024-06-11 13:48:23.475965] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:31.721 [2024-06-11 13:48:23.476044] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.721 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.721 [2024-06-11 13:48:23.542327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.721 [2024-06-11 13:48:23.617899] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.721 [2024-06-11 13:48:23.617938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
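The get_ip_address/allocate_nic_ips steps above reduce to a small amount of ip/awk/cut plumbing: the fourth field of `ip -o -4 addr show` is ADDR/PREFIX, and the first two addresses found on the RDMA netdevs become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that pattern, assuming the mlx_0_0/mlx_0_1 interface names seen in this run (the fixed interface list replaces the log's get_rdma_if_list helper):

#!/usr/bin/env bash
# Sketch of the address discovery shown above; interface names are assumed
# from this run, everything else mirrors the commands in the log.
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is ADDR/PREFIX,
    # cut drops the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=""
for nic in mlx_0_0 mlx_0_1; do
    RDMA_IP_LIST+="$(get_ip_address "$nic")"$'\n'
done

# First/second target IPs are the head of that list and the entry after it,
# the same head/tail juggling visible in the log.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "target IPs: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"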
00:20:31.721 [2024-06-11 13:48:23.617946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.722 [2024-06-11 13:48:23.617953] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.722 [2024-06-11 13:48:23.617958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.722 [2024-06-11 13:48:23.618112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.722 [2024-06-11 13:48:23.618326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.722 [2024-06-11 13:48:23.618486] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.722 [2024-06-11 13:48:23.618486] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 [2024-06-11 13:48:24.391721] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22bcdf0/0x22c12e0) succeed. 00:20:31.722 [2024-06-11 13:48:24.404516] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22be430/0x2302970) succeed. 
00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 Malloc0 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:31.722 [2024-06-11 13:48:24.584040] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2113177 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2113179 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.722 { 00:20:31.722 "params": { 00:20:31.722 "name": "Nvme$subsystem", 00:20:31.722 "trtype": "$TEST_TRANSPORT", 00:20:31.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.722 "adrfam": "ipv4", 00:20:31.722 "trsvcid": "$NVMF_PORT", 00:20:31.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.722 "hdgst": ${hdgst:-false}, 00:20:31.722 "ddgst": ${ddgst:-false} 00:20:31.722 }, 00:20:31.722 "method": "bdev_nvme_attach_controller" 00:20:31.722 } 00:20:31.722 EOF 00:20:31.722 
)") 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2113182 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2113185 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.722 { 00:20:31.722 "params": { 00:20:31.722 "name": "Nvme$subsystem", 00:20:31.722 "trtype": "$TEST_TRANSPORT", 00:20:31.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.722 "adrfam": "ipv4", 00:20:31.722 "trsvcid": "$NVMF_PORT", 00:20:31.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.722 "hdgst": ${hdgst:-false}, 00:20:31.722 "ddgst": ${ddgst:-false} 00:20:31.722 }, 00:20:31.722 "method": "bdev_nvme_attach_controller" 00:20:31.722 } 00:20:31.722 EOF 00:20:31.722 )") 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.722 { 00:20:31.722 "params": { 00:20:31.722 "name": "Nvme$subsystem", 00:20:31.722 "trtype": "$TEST_TRANSPORT", 00:20:31.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.722 "adrfam": "ipv4", 00:20:31.722 "trsvcid": "$NVMF_PORT", 00:20:31.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.722 "hdgst": ${hdgst:-false}, 00:20:31.722 "ddgst": ${ddgst:-false} 00:20:31.722 }, 00:20:31.722 "method": "bdev_nvme_attach_controller" 00:20:31.722 } 00:20:31.722 EOF 00:20:31.722 )") 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:31.722 13:48:24 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:31.722 { 00:20:31.722 "params": { 00:20:31.722 "name": "Nvme$subsystem", 00:20:31.722 "trtype": "$TEST_TRANSPORT", 00:20:31.722 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.722 "adrfam": "ipv4", 00:20:31.722 "trsvcid": "$NVMF_PORT", 00:20:31.722 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.722 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.722 "hdgst": ${hdgst:-false}, 00:20:31.722 "ddgst": ${ddgst:-false} 00:20:31.722 }, 00:20:31.722 "method": "bdev_nvme_attach_controller" 00:20:31.722 } 00:20:31.722 EOF 00:20:31.722 )") 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2113177 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.722 "params": { 00:20:31.722 "name": "Nvme1", 00:20:31.722 "trtype": "rdma", 00:20:31.722 "traddr": "192.168.100.8", 00:20:31.722 "adrfam": "ipv4", 00:20:31.722 "trsvcid": "4420", 00:20:31.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.722 "hdgst": false, 00:20:31.722 "ddgst": false 00:20:31.722 }, 00:20:31.722 "method": "bdev_nvme_attach_controller" 00:20:31.722 }' 00:20:31.722 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
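gen_nvmf_target_json, expanded in the heredocs above, hands each bdevperf instance a bdev_nvme_attach_controller entry pointing at the cnode1 subsystem listening on 192.168.100.8:4420. The printf output shows only the resolved per-controller params; the outer "subsystems"/"bdev" wrapper in the sketch below is an assumption about how the helper packages them into a complete --json file, and the /tmp path is illustrative only.

# Hand-written stand-in for the config gen_nvmf_target_json pipes to bdevperf.
# The params block is copied from the resolved JSON printed in the log above;
# the surrounding wrapper is assumed.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

Fed in via --json, this makes bdevperf attach Nvme1 over RDMA before the workload starts, which is the Nvme1n1 device the latency tables further down report on.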
00:20:31.723 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:31.723 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.723 "params": { 00:20:31.723 "name": "Nvme1", 00:20:31.723 "trtype": "rdma", 00:20:31.723 "traddr": "192.168.100.8", 00:20:31.723 "adrfam": "ipv4", 00:20:31.723 "trsvcid": "4420", 00:20:31.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.723 "hdgst": false, 00:20:31.723 "ddgst": false 00:20:31.723 }, 00:20:31.723 "method": "bdev_nvme_attach_controller" 00:20:31.723 }' 00:20:31.723 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:31.723 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.723 "params": { 00:20:31.723 "name": "Nvme1", 00:20:31.723 "trtype": "rdma", 00:20:31.723 "traddr": "192.168.100.8", 00:20:31.723 "adrfam": "ipv4", 00:20:31.723 "trsvcid": "4420", 00:20:31.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.723 "hdgst": false, 00:20:31.723 "ddgst": false 00:20:31.723 }, 00:20:31.723 "method": "bdev_nvme_attach_controller" 00:20:31.723 }' 00:20:31.723 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:20:31.723 13:48:24 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:31.723 "params": { 00:20:31.723 "name": "Nvme1", 00:20:31.723 "trtype": "rdma", 00:20:31.723 "traddr": "192.168.100.8", 00:20:31.723 "adrfam": "ipv4", 00:20:31.723 "trsvcid": "4420", 00:20:31.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.723 "hdgst": false, 00:20:31.723 "ddgst": false 00:20:31.723 }, 00:20:31.723 "method": "bdev_nvme_attach_controller" 00:20:31.723 }' 00:20:31.983 [2024-06-11 13:48:24.635180] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:31.983 [2024-06-11 13:48:24.635231] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:31.983 [2024-06-11 13:48:24.637924] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:31.983 [2024-06-11 13:48:24.637971] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:31.983 [2024-06-11 13:48:24.638545] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:31.983 [2024-06-11 13:48:24.638587] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:31.983 [2024-06-11 13:48:24.639194] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
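The four bdevperf command lines above differ only in core mask, instance id and workload (-m 0x10/-w write, -m 0x20/-w read, -m 0x40/-w flush, -m 0x80/-w unmap); the harness keeps their PIDs (WRITE_PID, READ_PID, FLUSH_PID, UNMAP_PID) and waits on each in turn. A compact sketch of that launch pattern, reusing the illustrative config file from the previous sketch instead of the log's /dev/fd/63 process substitution:

# Launch the four one-second workloads concurrently, one core each, mirroring
# the -m/-i/-w combinations recorded above.
BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf

pids=()
i=1
for spec in "0x10 write" "0x20 read" "0x40 flush" "0x80 unmap"; do
    read -r mask workload <<< "$spec"
    "$BDEVPERF" -m "$mask" -i "$i" --json /tmp/bdevperf_nvme.json \
                -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
    pids+=($!)
    i=$((i + 1))
done

# Equivalent of the staggered `wait $WRITE_PID` ... `wait $UNMAP_PID` calls.
for pid in "${pids[@]}"; do
    wait "$pid"
done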
00:20:31.983 [2024-06-11 13:48:24.639236] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:31.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.983 [2024-06-11 13:48:24.782091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.983 [2024-06-11 13:48:24.826804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.983 [2024-06-11 13:48:24.833467] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:20:31.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.983 [2024-06-11 13:48:24.876636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:20:31.983 [2024-06-11 13:48:24.887282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.243 [2024-06-11 13:48:24.918236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.243 [2024-06-11 13:48:24.939341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:20:32.243 [2024-06-11 13:48:24.969351] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:20:32.243 Running I/O for 1 seconds... 00:20:32.243 Running I/O for 1 seconds... 00:20:32.243 Running I/O for 1 seconds... 00:20:32.243 Running I/O for 1 seconds... 00:20:33.183 00:20:33.183 Latency(us) 00:20:33.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.183 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:33.183 Nvme1n1 : 1.00 18298.81 71.48 0.00 0.00 6973.26 4860.59 16711.68 00:20:33.183 =================================================================================================================== 00:20:33.183 Total : 18298.81 71.48 0.00 0.00 6973.26 4860.59 16711.68 00:20:33.183 00:20:33.183 Latency(us) 00:20:33.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.183 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:33.183 Nvme1n1 : 1.01 16442.73 64.23 0.00 0.00 7759.01 5707.09 18350.08 00:20:33.183 =================================================================================================================== 00:20:33.183 Total : 16442.73 64.23 0.00 0.00 7759.01 5707.09 18350.08 00:20:33.443 00:20:33.443 Latency(us) 00:20:33.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.443 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:33.443 Nvme1n1 : 1.00 189833.92 741.54 0.00 0.00 671.75 266.24 2443.95 00:20:33.443 =================================================================================================================== 00:20:33.443 Total : 189833.92 741.54 0.00 0.00 671.75 266.24 2443.95 00:20:33.443 00:20:33.443 Latency(us) 00:20:33.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.443 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:33.443 Nvme1n1 : 1.00 19869.01 77.61 0.00 0.00 6425.86 3604.48 17803.95 00:20:33.443 =================================================================================================================== 00:20:33.443 Total : 19869.01 77.61 0.00 0.00 6425.86 3604.48 17803.95 00:20:33.443 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 2113179 00:20:33.443 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2113182 00:20:33.443 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2113185 00:20:33.443 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:33.704 rmmod nvme_rdma 00:20:33.704 rmmod nvme_fabrics 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2113091 ']' 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2113091 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 2113091 ']' 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 2113091 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2113091 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2113091' 00:20:33.704 killing process with pid 2113091 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 2113091 00:20:33.704 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 2113091 00:20:33.965 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.965 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:33.965 00:20:33.965 real 0m10.444s 00:20:33.965 user 0m19.676s 00:20:33.965 sys 0m6.255s 00:20:33.965 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:20:33.965 13:48:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:20:33.965 ************************************ 00:20:33.965 END TEST nvmf_bdev_io_wait 00:20:33.965 ************************************ 00:20:33.965 13:48:26 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:20:33.965 13:48:26 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:33.965 13:48:26 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:33.965 13:48:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:33.965 ************************************ 00:20:33.965 START TEST nvmf_queue_depth 00:20:33.965 ************************************ 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:20:33.965 * Looking for test storage... 00:20:33.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.965 13:48:26 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.966 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.227 13:48:26 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:20:42.368 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:20:42.368 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.368 13:48:33 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:20:42.368 Found net devices under 0000:98:00.0: mlx_0_0 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:20:42.368 Found net devices under 0000:98:00.1: mlx_0_1 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:42.368 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:42.369 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:42.369 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:20:42.369 altname enp152s0f0np0 00:20:42.369 altname ens817f0np0 00:20:42.369 inet 192.168.100.8/24 scope global mlx_0_0 00:20:42.369 valid_lft forever preferred_lft forever 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:42.369 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 
00:20:42.369 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:20:42.369 altname enp152s0f1np1 00:20:42.369 altname ens817f1np1 00:20:42.369 inet 192.168.100.9/24 scope global mlx_0_1 00:20:42.369 valid_lft forever preferred_lft forever 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:42.369 192.168.100.9' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:42.369 192.168.100.9' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:42.369 192.168.100.9' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:42.369 13:48:33 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2117558 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2117558 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2117558 ']' 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.369 [2024-06-11 13:48:34.070190] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:20:42.369 [2024-06-11 13:48:34.070258] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.369 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.369 [2024-06-11 13:48:34.154417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.369 [2024-06-11 13:48:34.247111] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.369 [2024-06-11 13:48:34.247166] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.369 [2024-06-11 13:48:34.247174] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.369 [2024-06-11 13:48:34.247181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.369 [2024-06-11 13:48:34.247187] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.369 [2024-06-11 13:48:34.247218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.369 13:48:34 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:42.370 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.370 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.370 [2024-06-11 13:48:34.936639] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24d4fb0/0x24d94a0) succeed. 00:20:42.370 [2024-06-11 13:48:34.949157] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24d64b0/0x251ab30) succeed. 
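For reference, the target-side setup traced above amounts to two manual steps: start nvmf_tgt on core mask 0x2 and create the RDMA transport over its RPC socket. A minimal sketch, assuming the workspace path used by this job, the default /var/tmp/spdk.sock socket the target is shown listening on, and a plain sleep in place of the harness's waitforlisten helper:

    # Sketch only -- reproduces the transport setup traced above by hand.
    # 'sleep 2' stands in for the waitforlisten helper used by the harness.
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    sleep 2
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192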
00:20:42.370 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.370 13:48:34 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:42.370 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.370 13:48:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.370 Malloc0 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.370 [2024-06-11 13:48:35.040511] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2117612 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2117612 /var/tmp/bdevperf.sock 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2117612 ']' 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
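queue_depth.sh@24-27 above builds the export path (a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and an RDMA listener on 192.168.100.8:4420) and then launches bdevperf in wait-for-RPC mode on its own socket with a queue depth of 1024 and 4 KiB verify I/O. Written out as direct scripts/rpc.py calls, the same sequence looks roughly like this (a sketch, not the harness's own code):

    # Sketch of the rpc_cmd sequence traced above, issued directly with rpc.py.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # bdevperf runs as a separate app, idle (-z) until told what to attach:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &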
00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:42.370 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:42.370 [2024-06-11 13:48:35.093891] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:20:42.370 [2024-06-11 13:48:35.093955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117612 ] 00:20:42.370 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.370 [2024-06-11 13:48:35.159960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.370 [2024-06-11 13:48:35.235275] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:43.312 NVMe0n1 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.312 13:48:35 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:43.312 Running I/O for 10 seconds... 
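With the subsystem exported, queue_depth.sh@34-35 drives the initiator side: an NVMe-oF controller is attached over RDMA inside the idle bdevperf process (which surfaces the NVMe0n1 bdev seen above), and bdevperf.py then kicks off the 10-second run. Done by hand, the two steps would be roughly:

    # Sketch: attach the exported subsystem inside bdevperf, then start the run.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests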
00:20:53.387 00:20:53.387 Latency(us) 00:20:53.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.387 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:53.387 Verification LBA range: start 0x0 length 0x4000 00:20:53.387 NVMe0n1 : 10.04 15610.28 60.98 0.00 0.00 65419.72 21408.43 46312.11 00:20:53.387 =================================================================================================================== 00:20:53.387 Total : 15610.28 60.98 0.00 0.00 65419.72 21408.43 46312.11 00:20:53.387 0 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2117612 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2117612 ']' 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2117612 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2117612 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2117612' 00:20:53.387 killing process with pid 2117612 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2117612 00:20:53.387 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.387 00:20:53.387 Latency(us) 00:20:53.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.387 =================================================================================================================== 00:20:53.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:53.387 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2117612 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:53.648 rmmod nvme_rdma 00:20:53.648 rmmod nvme_fabrics 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2117558 ']' 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2117558 
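The headline figures in the table above are internally consistent: at the 4 KiB I/O size, 15610.28 IOPS works out to 15610.28 × 4096 / 2^20 ≈ 60.98 MiB/s, and with 1024 commands kept in flight Little's law predicts an average latency of 1024 / 15610.28 ≈ 65.6 ms, in line with the reported 65419.72 us. A quick check using only the printed values:

    # Sanity-check sketch using only the values printed in the results table.
    awk 'BEGIN {
        iops = 15610.28; iosize = 4096; qd = 1024
        printf "throughput  : %.2f MiB/s\n", iops * iosize / (1024 * 1024)   # ~60.98
        printf "avg latency : %.0f us (reported: 65419.72)\n", qd / iops * 1e6
    }'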
00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2117558 ']' 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2117558 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2117558 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2117558' 00:20:53.648 killing process with pid 2117558 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2117558 00:20:53.648 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2117558 00:20:53.910 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:53.910 00:20:53.910 real 0m19.853s 00:20:53.910 user 0m26.116s 00:20:53.910 sys 0m5.914s 00:20:53.910 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:53.910 13:48:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:20:53.910 ************************************ 00:20:53.910 END TEST nvmf_queue_depth 00:20:53.910 ************************************ 00:20:53.910 13:48:46 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:20:53.910 13:48:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:53.910 13:48:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:53.910 13:48:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:53.910 ************************************ 00:20:53.910 START TEST nvmf_target_multipath 00:20:53.910 ************************************ 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:20:53.910 * Looking for test storage... 
00:20:53.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
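As in the queue-depth run above, the first visible action of nvmftestinit is to register nvmftestfini on SIGINT, SIGTERM and EXIT, so the target and any loaded modules are torn down however the script ends; later trap lines (as seen with bdevperf earlier) simply layer extra cleanup onto the same signals. The general shape of that idiom, as a standalone sketch with placeholder names:

    # Sketch of the cleanup idiom used throughout these scripts: register the
    # teardown once, then extend the trap as more state is created.
    cleanup() { echo 'tearing down'; }                  # stand-in for nvmftestfini
    trap cleanup SIGINT SIGTERM EXIT
    some_tool &                                         # placeholder for e.g. bdevperf
    extra_pid=$!
    trap 'kill $extra_pid 2>/dev/null; cleanup' SIGINT SIGTERM EXIT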
00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.910 13:48:46 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.061 13:48:53 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:02.061 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:02.061 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:02.061 Found net devices under 0000:98:00.0: mlx_0_0 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.061 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:02.062 Found net devices under 0000:98:00.1: mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:02.062 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:02.062 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:02.062 altname enp152s0f0np0 00:21:02.062 altname ens817f0np0 00:21:02.062 inet 192.168.100.8/24 scope global mlx_0_0 00:21:02.062 valid_lft forever preferred_lft forever 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:02.062 13:48:53 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:02.062 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:02.062 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:02.062 altname enp152s0f1np1 00:21:02.062 altname ens817f1np1 00:21:02.062 inet 192.168.100.9/24 scope global mlx_0_1 00:21:02.062 valid_lft forever preferred_lft forever 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:02.062 192.168.100.9' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:02.062 192.168.100.9' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:02.062 192.168.100.9' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:21:02.062 run this test only with TCP transport for now 00:21:02.062 13:48:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:02.063 rmmod nvme_rdma 00:21:02.063 rmmod nvme_fabrics 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:02.063 
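The address discovery traced in this run (and in the queue-depth run earlier) reduces to one small pipeline per RDMA interface, with the first and second lines of the resulting list becoming NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. Condensed into a sketch (get_ip is a local helper name for illustration, not the scripts' own):

    # Sketch of the IP discovery traced above; interface names as found on this host.
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip mlx_0_0)" "$(get_ip mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here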
13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:02.063 00:21:02.063 real 0m7.291s 00:21:02.063 user 0m2.095s 00:21:02.063 sys 0m5.272s 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:02.063 13:48:53 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:02.063 ************************************ 00:21:02.063 END TEST nvmf_target_multipath 00:21:02.063 ************************************ 00:21:02.063 13:48:53 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:21:02.063 13:48:53 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:02.063 13:48:53 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:02.063 13:48:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:02.063 ************************************ 00:21:02.063 START TEST nvmf_zcopy 00:21:02.063 ************************************ 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:21:02.063 * Looking for test storage... 
00:21:02.063 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.063 13:48:54 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:21:08.647 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:08.648 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:08.648 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:08.648 Found net devices under 0000:98:00.0: mlx_0_0 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:08.648 Found net devices under 0000:98:00.1: mlx_0_1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:21:08.648 13:49:01 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:08.648 13:49:01 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:08.648 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:08.648 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:08.648 altname enp152s0f0np0 00:21:08.648 altname ens817f0np0 00:21:08.648 inet 192.168.100.8/24 scope global mlx_0_0 00:21:08.648 valid_lft forever preferred_lft forever 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:08.648 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:08.648 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:08.648 altname enp152s0f1np1 00:21:08.648 altname ens817f1np1 00:21:08.648 inet 192.168.100.9/24 scope global mlx_0_1 00:21:08.648 valid_lft forever preferred_lft forever 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.648 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:08.649 192.168.100.9' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:08.649 192.168.100.9' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:08.649 192.168.100.9' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2127372 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2127372 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 2127372 ']' 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:08.649 13:49:01 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:08.649 [2024-06-11 13:49:01.310950] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:08.649 [2024-06-11 13:49:01.311047] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.649 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.649 [2024-06-11 13:49:01.401964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.649 [2024-06-11 13:49:01.495381] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.649 [2024-06-11 13:49:01.495443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.649 [2024-06-11 13:49:01.495451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.649 [2024-06-11 13:49:01.495458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.649 [2024-06-11 13:49:01.495464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
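Note: nvmfappstart has just launched the target in the background (/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 2127372) and waitforlisten is polling until that process answers on /var/tmp/spdk.sock. The real waitforlisten in autotest_common.sh does more than this, so treat the following as an illustration of the pattern only, reusing the names echoed in the log:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do          # max_retries=100, as echoed above
      kill -0 "$nvmfpid" || exit 1         # target died before it started listening
      [ -S /var/tmp/spdk.sock ] && break   # UNIX domain RPC socket is up
      sleep 0.5
  done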
00:21:08.649 [2024-06-11 13:49:01.495499] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.219 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:09.219 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:21:09.219 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.219 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:09.219 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:21:09.481 Unsupported transport: rdma 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # type=--id 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # id=0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:09.481 nvmf_trace.0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@822 -- # return 0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:09.481 rmmod nvme_rdma 00:21:09.481 rmmod nvme_fabrics 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2127372 ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2127372 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 2127372 ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 2127372 00:21:09.481 13:49:02 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2127372 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2127372' 00:21:09.481 killing process with pid 2127372 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 2127372 00:21:09.481 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 2127372 00:21:09.741 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.741 13:49:02 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:09.741 00:21:09.741 real 0m8.429s 00:21:09.741 user 0m3.429s 00:21:09.741 sys 0m5.620s 00:21:09.741 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:09.741 13:49:02 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:09.741 ************************************ 00:21:09.741 END TEST nvmf_zcopy 00:21:09.741 ************************************ 00:21:09.741 13:49:02 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:21:09.742 13:49:02 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:09.742 13:49:02 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:09.742 13:49:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:09.742 ************************************ 00:21:09.742 START TEST nvmf_nmic 00:21:09.742 ************************************ 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:21:09.742 * Looking for test storage... 
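Note: zcopy is skipped the same way. target/zcopy.sh@15-@17 bails out as soon as the transport is not tcp, and the EXIT trap then archives the shared-memory trace file and kills the target, which is the tar/kill/wait sequence visible above. Condensed below, with $TEST_TRANSPORT and $output_dir as stand-in names for values the trace only shows expanded:

  if [ "$TEST_TRANSPORT" != tcp ]; then
      echo "Unsupported transport: $TEST_TRANSPORT"
      exit 0                               # the EXIT trap still runs process_shm and nvmftestfini
  fi
  # EXIT trap, roughly as traced:
  shm_files=$(find /dev/shm -name "*.$NVMF_APP_SHM_ID" -printf '%f\n')
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"    # picks up nvmf_trace.0
  done
  kill "$nvmfpid" && wait "$nvmfpid"       # killprocess: checked alive with kill -0 first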
00:21:09.742 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.742 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.003 
13:49:02 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.003 13:49:02 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:16.595 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:16.595 13:49:09 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:16.595 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:16.595 Found net devices under 0000:98:00.0: mlx_0_0 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:16.595 Found net devices under 0000:98:00.1: mlx_0_1 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:16.595 13:49:09 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.595 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:16.596 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:21:16.596 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:16.596 altname enp152s0f0np0 00:21:16.596 altname ens817f0np0 00:21:16.596 inet 192.168.100.8/24 scope global mlx_0_0 00:21:16.596 valid_lft forever preferred_lft forever 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:16.596 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:16.596 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:16.596 altname enp152s0f1np1 00:21:16.596 altname ens817f1np1 00:21:16.596 inet 192.168.100.9/24 scope global mlx_0_1 00:21:16.596 valid_lft forever preferred_lft forever 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:16.596 192.168.100.9' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:16.596 192.168.100.9' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:16.596 192.168.100.9' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2131425 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2131425 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 2131425 ']' 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
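Note: before starting the target for nmic (pid 2131425 above), common.sh resolves the two target addresses, 192.168.100.8 and 192.168.100.9, from the mlx_0_0/mlx_0_1 netdevs. The shape of that logic, paraphrased from the get_ip_address and common.sh@456-@458 trace (not the verbatim common.sh):

  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for nic_name in $(get_rdma_if_list); do get_ip_address "$nic_name"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

With both IPs in hand the script loads nvme-rdma (common.sh@474) and hands control back to nmic.sh.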
00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:16.596 13:49:09 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:16.596 [2024-06-11 13:49:09.456995] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:16.596 [2024-06-11 13:49:09.457069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.596 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.857 [2024-06-11 13:49:09.523370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.857 [2024-06-11 13:49:09.598848] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.857 [2024-06-11 13:49:09.598887] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.857 [2024-06-11 13:49:09.598895] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.857 [2024-06-11 13:49:09.598901] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.857 [2024-06-11 13:49:09.598907] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.857 [2024-06-11 13:49:09.599057] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.857 [2024-06-11 13:49:09.599094] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.857 [2024-06-11 13:49:09.599147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.858 [2024-06-11 13:49:09.599147] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.430 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.430 [2024-06-11 13:49:10.320909] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1955e90/0x195a380) succeed. 00:21:17.430 [2024-06-11 13:49:10.335358] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19574d0/0x199ba10) succeed. 
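
At this point the nmic test has launched nvmf_tgt and created the RDMA transport (rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py). A hand-run sketch of the same bring-up, with the paths and arguments copied from this run:

  # Sketch: manual equivalent of nvmfappstart -m 0xF plus the transport RPC,
  # using the binary, core mask and RPC arguments that appear in this trace.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poor man's waitforlisten: retry until the RPC socket answers
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 1
  done
  $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
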
00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 Malloc0 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 [2024-06-11 13:49:10.510083] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:17.691 test case1: single bdev can't be used in multiple subsystems 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.691 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.691 [2024-06-11 13:49:10.545908] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:17.691 [2024-06-11 
13:49:10.545928] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:17.691 [2024-06-11 13:49:10.545935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.692 request: 00:21:17.692 { 00:21:17.692 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.692 "namespace": { 00:21:17.692 "bdev_name": "Malloc0", 00:21:17.692 "no_auto_visible": false 00:21:17.692 }, 00:21:17.692 "method": "nvmf_subsystem_add_ns", 00:21:17.692 "req_id": 1 00:21:17.692 } 00:21:17.692 Got JSON-RPC error response 00:21:17.692 response: 00:21:17.692 { 00:21:17.692 "code": -32602, 00:21:17.692 "message": "Invalid parameters" 00:21:17.692 } 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:21:17.692 Adding namespace failed - expected result. 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:17.692 test case2: host connect to nvmf target in multiple paths 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:17.692 [2024-06-11 13:49:10.557956] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.692 13:49:10 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:19.075 13:49:11 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:21:20.986 13:49:13 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:20.986 13:49:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:21:20.986 13:49:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.986 13:49:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:21:20.986 13:49:13 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:21:22.897 13:49:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:22.897 13:49:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:22.897 13:49:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:22.897 13:49:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:21:22.897 13:49:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.897 13:49:15 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:21:22.897 13:49:15 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:22.897 [global] 00:21:22.897 thread=1 00:21:22.897 invalidate=1 00:21:22.897 rw=write 00:21:22.897 time_based=1 00:21:22.897 runtime=1 00:21:22.897 ioengine=libaio 00:21:22.897 direct=1 00:21:22.897 bs=4096 00:21:22.897 iodepth=1 00:21:22.897 norandommap=0 00:21:22.897 numjobs=1 00:21:22.897 00:21:22.897 verify_dump=1 00:21:22.897 verify_backlog=512 00:21:22.897 verify_state_save=0 00:21:22.897 do_verify=1 00:21:22.897 verify=crc32c-intel 00:21:22.897 [job0] 00:21:22.897 filename=/dev/nvme0n1 00:21:22.897 Could not set queue depth (nvme0n1) 00:21:23.157 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:23.157 fio-3.35 00:21:23.157 Starting 1 thread 00:21:24.096 00:21:24.096 job0: (groupid=0, jobs=1): err= 0: pid=2132774: Tue Jun 11 13:49:16 2024 00:21:24.096 read: IOPS=7988, BW=31.2MiB/s (32.7MB/s)(31.2MiB/1001msec) 00:21:24.096 slat (nsec): min=5510, max=32573, avg=5931.42, stdev=738.10 00:21:24.096 clat (nsec): min=34396, max=81895, avg=53042.00, stdev=3660.07 00:21:24.096 lat (usec): min=51, max=114, avg=58.97, stdev= 3.72 00:21:24.096 clat percentiles (nsec): 00:21:24.096 | 1.00th=[46848], 5.00th=[47872], 10.00th=[48896], 20.00th=[49920], 00:21:24.096 | 30.00th=[50944], 40.00th=[51456], 50.00th=[52480], 60.00th=[53504], 00:21:24.096 | 70.00th=[54528], 80.00th=[56064], 90.00th=[58112], 95.00th=[59648], 00:21:24.096 | 99.00th=[62720], 99.50th=[63744], 99.90th=[68096], 99.95th=[71168], 00:21:24.096 | 99.99th=[81408] 00:21:24.096 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:21:24.096 slat (nsec): min=7784, max=46869, avg=8393.57, stdev=963.86 00:21:24.096 clat (usec): min=29, max=236, avg=51.95, stdev= 6.72 00:21:24.096 lat (usec): min=51, max=245, avg=60.35, stdev= 6.84 00:21:24.096 clat percentiles (usec): 00:21:24.096 | 1.00th=[ 45], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:21:24.096 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:21:24.096 | 70.00th=[ 54], 80.00th=[ 56], 90.00th=[ 58], 95.00th=[ 59], 00:21:24.096 | 99.00th=[ 63], 99.50th=[ 70], 99.90th=[ 190], 99.95th=[ 208], 00:21:24.096 | 99.99th=[ 237] 00:21:24.096 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:21:24.096 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:21:24.096 lat (usec) : 50=29.19%, 100=70.74%, 250=0.07% 00:21:24.096 cpu : usr=10.10%, sys=15.50%, ctx=16189, majf=0, minf=1 00:21:24.096 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:24.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.096 issued rwts: total=7996,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.096 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:24.096 00:21:24.096 Run status group 0 (all jobs): 00:21:24.096 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=31.2MiB (32.8MB), run=1001-1001msec 00:21:24.096 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:21:24.096 00:21:24.096 Disk stats (read/write): 00:21:24.096 nvme0n1: ios=7218/7383, merge=0/0, ticks=326/320, in_queue=646, 
util=90.68% 00:21:24.096 13:49:16 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:27.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:27.395 rmmod nvme_rdma 00:21:27.395 rmmod nvme_fabrics 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2131425 ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2131425 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 2131425 ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 2131425 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2131425 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2131425' 00:21:27.395 killing process with pid 2131425 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 2131425 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 2131425 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:27.395 00:21:27.395 real 0m17.434s 00:21:27.395 user 0m59.940s 00:21:27.395 sys 0m5.820s 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:21:27.395 13:49:19 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:21:27.395 ************************************ 00:21:27.395 END TEST nvmf_nmic 00:21:27.395 ************************************ 00:21:27.395 13:49:20 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:21:27.395 13:49:20 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:27.395 13:49:20 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:27.395 13:49:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:27.395 ************************************ 00:21:27.395 START TEST nvmf_fio_target 00:21:27.395 ************************************ 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:21:27.395 * Looking for test storage... 00:21:27.395 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:27.395 13:49:20 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.395 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.396 13:49:20 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:35.537 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:35.537 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:35.537 Found net devices under 0000:98:00.0: mlx_0_0 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:35.537 Found net devices under 0000:98:00.1: mlx_0_1 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:35.537 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:35.538 13:49:27 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:35.538 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:35.538 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:35.538 altname enp152s0f0np0 00:21:35.538 altname ens817f0np0 00:21:35.538 inet 192.168.100.8/24 scope global mlx_0_0 00:21:35.538 valid_lft forever preferred_lft forever 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:35.538 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:35.538 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:35.538 altname enp152s0f1np1 00:21:35.538 altname ens817f1np1 00:21:35.538 inet 192.168.100.9/24 scope global mlx_0_1 00:21:35.538 valid_lft forever 
preferred_lft forever 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:35.538 13:49:27 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:35.538 192.168.100.9' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:35.538 192.168.100.9' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:35.538 192.168.100.9' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2137368 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2137368 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 2137368 ']' 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:35.538 13:49:27 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 [2024-06-11 13:49:27.339067] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:21:35.538 [2024-06-11 13:49:27.339155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.539 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.539 [2024-06-11 13:49:27.407122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.539 [2024-06-11 13:49:27.481683] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:35.539 [2024-06-11 13:49:27.481722] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.539 [2024-06-11 13:49:27.481729] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.539 [2024-06-11 13:49:27.481736] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.539 [2024-06-11 13:49:27.481741] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.539 [2024-06-11 13:49:27.481881] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.539 [2024-06-11 13:49:27.482015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.539 [2024-06-11 13:49:27.482163] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.539 [2024-06-11 13:49:27.482164] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.539 13:49:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:35.539 [2024-06-11 13:49:28.327259] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23f5e90/0x23fa380) succeed. 00:21:35.539 [2024-06-11 13:49:28.342287] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23f74d0/0x243ba10) succeed. 
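
The startup notice above mentions 'spdk_trace -s nvmf -i 0' because the target was started with -e 0xFFFF (all tracepoint groups enabled). If a snapshot is wanted while the test is still running, something like the following works; the location of the spdk_trace app under build/bin is an assumption about this build tree:

  # Sketch: take the runtime trace snapshot the startup notice refers to.
  # The spdk_trace path is assumed; the flags are quoted from the notice itself.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or keep the raw shared-memory file for offline decoding, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/
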
00:21:35.800 13:49:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:35.800 13:49:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:35.800 13:49:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.061 13:49:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:36.061 13:49:28 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.322 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:36.322 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.322 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:36.322 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:36.583 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.844 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:36.844 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:36.844 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:36.844 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:37.130 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:37.130 13:49:29 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:37.396 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:37.396 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:37.396 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:37.656 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:37.656 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:37.918 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:37.918 [2024-06-11 13:49:30.722960] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:37.918 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:21:38.179 13:49:30 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:38.179 13:49:31 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:40.088 13:49:32 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:40.088 13:49:32 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:21:40.088 13:49:32 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:21:40.088 13:49:32 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:21:40.088 13:49:32 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:21:40.088 13:49:32 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:21:42.023 13:49:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:42.023 [global] 00:21:42.023 thread=1 00:21:42.023 invalidate=1 00:21:42.023 rw=write 00:21:42.023 time_based=1 00:21:42.023 runtime=1 00:21:42.023 ioengine=libaio 00:21:42.023 direct=1 00:21:42.023 bs=4096 00:21:42.023 iodepth=1 00:21:42.023 norandommap=0 00:21:42.023 numjobs=1 00:21:42.023 00:21:42.023 verify_dump=1 00:21:42.023 verify_backlog=512 00:21:42.023 verify_state_save=0 00:21:42.023 do_verify=1 00:21:42.023 verify=crc32c-intel 00:21:42.023 [job0] 00:21:42.023 filename=/dev/nvme0n1 00:21:42.023 [job1] 00:21:42.023 filename=/dev/nvme0n2 00:21:42.023 [job2] 00:21:42.023 filename=/dev/nvme0n3 00:21:42.023 [job3] 00:21:42.023 filename=/dev/nvme0n4 00:21:42.023 Could not set queue depth (nvme0n1) 00:21:42.023 Could not set queue depth (nvme0n2) 00:21:42.023 Could not set queue depth (nvme0n3) 00:21:42.023 Could not set queue depth (nvme0n4) 00:21:42.283 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:42.283 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:42.283 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:42.283 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:42.283 fio-3.35 00:21:42.283 Starting 4 threads 00:21:43.667 00:21:43.667 job0: (groupid=0, jobs=1): err= 0: pid=2139064: Tue Jun 11 13:49:36 2024 00:21:43.667 read: IOPS=4518, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1001msec) 00:21:43.667 slat 
(nsec): min=5548, max=53966, avg=8975.17, stdev=6778.38 00:21:43.667 clat (usec): min=46, max=483, avg=99.18, stdev=56.91 00:21:43.667 lat (usec): min=52, max=516, avg=108.15, stdev=61.99 00:21:43.667 clat percentiles (usec): 00:21:43.667 | 1.00th=[ 55], 5.00th=[ 65], 10.00th=[ 69], 20.00th=[ 74], 00:21:43.667 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 89], 00:21:43.667 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 255], 00:21:43.667 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 424], 99.95th=[ 445], 00:21:43.667 | 99.99th=[ 486] 00:21:43.667 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:43.667 slat (nsec): min=7828, max=80980, avg=11613.90, stdev=7088.70 00:21:43.667 clat (usec): min=44, max=436, avg=93.38, stdev=53.67 00:21:43.667 lat (usec): min=53, max=469, avg=105.00, stdev=59.08 00:21:43.667 clat percentiles (usec): 00:21:43.667 | 1.00th=[ 51], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 68], 00:21:43.667 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 87], 00:21:43.667 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 249], 00:21:43.667 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 416], 99.95th=[ 424], 00:21:43.667 | 99.99th=[ 437] 00:21:43.667 bw ( KiB/s): min=20480, max=20480, per=34.76%, avg=20480.00, stdev= 0.00, samples=1 00:21:43.667 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:21:43.667 lat (usec) : 50=0.55%, 100=86.50%, 250=7.87%, 500=5.08% 00:21:43.667 cpu : usr=7.00%, sys=13.60%, ctx=9132, majf=0, minf=1 00:21:43.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.667 issued rwts: total=4523,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:43.667 job1: (groupid=0, jobs=1): err= 0: pid=2139080: Tue Jun 11 13:49:36 2024 00:21:43.667 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:21:43.667 slat (nsec): min=2703, max=47967, avg=13469.53, stdev=10731.54 00:21:43.667 clat (usec): min=31, max=423, avg=139.99, stdev=78.69 00:21:43.667 lat (usec): min=44, max=440, avg=153.46, stdev=86.40 00:21:43.667 clat percentiles (usec): 00:21:43.667 | 1.00th=[ 49], 5.00th=[ 58], 10.00th=[ 81], 20.00th=[ 86], 00:21:43.667 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 95], 60.00th=[ 101], 00:21:43.667 | 70.00th=[ 194], 80.00th=[ 233], 90.00th=[ 258], 95.00th=[ 277], 00:21:43.667 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 416], 00:21:43.667 | 99.99th=[ 424] 00:21:43.667 write: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:21:43.667 slat (nsec): min=7819, max=51682, avg=15531.04, stdev=11078.85 00:21:43.667 clat (usec): min=45, max=445, avg=134.32, stdev=77.41 00:21:43.668 lat (usec): min=53, max=454, avg=149.85, stdev=85.36 00:21:43.668 clat percentiles (usec): 00:21:43.668 | 1.00th=[ 50], 5.00th=[ 70], 10.00th=[ 77], 20.00th=[ 83], 00:21:43.668 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 96], 00:21:43.668 | 70.00th=[ 159], 80.00th=[ 231], 90.00th=[ 255], 95.00th=[ 281], 00:21:43.668 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 412], 99.95th=[ 429], 00:21:43.668 | 99.99th=[ 445] 00:21:43.668 bw ( KiB/s): min=13544, max=13544, per=22.99%, avg=13544.00, stdev= 0.00, samples=1 00:21:43.668 iops : min= 3386, max= 3386, avg=3386.00, stdev= 0.00, samples=1 00:21:43.668 lat 
(usec) : 50=1.17%, 100=61.19%, 250=25.58%, 500=12.06% 00:21:43.668 cpu : usr=6.30%, sys=13.30%, ctx=6427, majf=0, minf=1 00:21:43.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.668 issued rwts: total=3072,3355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:43.668 job2: (groupid=0, jobs=1): err= 0: pid=2139096: Tue Jun 11 13:49:36 2024 00:21:43.668 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:21:43.668 slat (nsec): min=5834, max=47397, avg=20467.37, stdev=11232.38 00:21:43.668 clat (usec): min=72, max=497, avg=214.72, stdev=69.96 00:21:43.668 lat (usec): min=78, max=503, avg=235.18, stdev=70.76 00:21:43.668 clat percentiles (usec): 00:21:43.668 | 1.00th=[ 78], 5.00th=[ 85], 10.00th=[ 108], 20.00th=[ 190], 00:21:43.668 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 215], 60.00th=[ 231], 00:21:43.668 | 70.00th=[ 247], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 330], 00:21:43.668 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 424], 99.95th=[ 486], 00:21:43.668 | 99.99th=[ 498] 00:21:43.668 write: IOPS=2170, BW=8683KiB/s (8892kB/s)(8692KiB/1001msec); 0 zone resets 00:21:43.668 slat (nsec): min=8069, max=67846, avg=22034.66, stdev=12396.94 00:21:43.668 clat (usec): min=66, max=447, avg=205.26, stdev=83.95 00:21:43.668 lat (usec): min=74, max=455, avg=227.30, stdev=88.61 00:21:43.668 clat percentiles (usec): 00:21:43.668 | 1.00th=[ 72], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 90], 00:21:43.668 | 30.00th=[ 178], 40.00th=[ 202], 50.00th=[ 223], 60.00th=[ 239], 00:21:43.668 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 330], 00:21:43.668 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 437], 99.95th=[ 441], 00:21:43.668 | 99.99th=[ 449] 00:21:43.668 bw ( KiB/s): min= 9896, max= 9896, per=16.80%, avg=9896.00, stdev= 0.00, samples=1 00:21:43.668 iops : min= 2474, max= 2474, avg=2474.00, stdev= 0.00, samples=1 00:21:43.668 lat (usec) : 100=15.59%, 250=54.49%, 500=29.92% 00:21:43.668 cpu : usr=5.70%, sys=12.80%, ctx=4221, majf=0, minf=1 00:21:43.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.668 issued rwts: total=2048,2173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:43.668 job3: (groupid=0, jobs=1): err= 0: pid=2139102: Tue Jun 11 13:49:36 2024 00:21:43.668 read: IOPS=4322, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1001msec) 00:21:43.668 slat (nsec): min=5951, max=53046, avg=9652.14, stdev=7118.19 00:21:43.668 clat (usec): min=55, max=482, avg=102.68, stdev=60.40 00:21:43.668 lat (usec): min=64, max=516, avg=112.33, stdev=65.60 00:21:43.668 clat percentiles (usec): 00:21:43.668 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 75], 00:21:43.668 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 90], 00:21:43.668 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 120], 95.00th=[ 269], 00:21:43.668 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 420], 99.95th=[ 437], 00:21:43.668 | 99.99th=[ 482] 00:21:43.668 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:43.668 slat (nsec): min=8248, max=76300, avg=11637.58, 
stdev=6856.22 00:21:43.668 clat (usec): min=51, max=426, avg=94.46, stdev=50.47 00:21:43.668 lat (usec): min=60, max=456, avg=106.10, stdev=55.65 00:21:43.668 clat percentiles (usec): 00:21:43.668 | 1.00th=[ 57], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 72], 00:21:43.668 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 87], 00:21:43.668 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 102], 95.00th=[ 245], 00:21:43.668 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 388], 99.95th=[ 396], 00:21:43.668 | 99.99th=[ 429] 00:21:43.668 bw ( KiB/s): min=19152, max=19152, per=32.51%, avg=19152.00, stdev= 0.00, samples=1 00:21:43.668 iops : min= 4788, max= 4788, avg=4788.00, stdev= 0.00, samples=1 00:21:43.668 lat (usec) : 100=86.85%, 250=7.88%, 500=5.27% 00:21:43.668 cpu : usr=6.60%, sys=13.40%, ctx=8935, majf=0, minf=1 00:21:43.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:43.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.668 issued rwts: total=4327,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:43.668 00:21:43.668 Run status group 0 (all jobs): 00:21:43.668 READ: bw=54.5MiB/s (57.2MB/s), 8184KiB/s-17.6MiB/s (8380kB/s-18.5MB/s), io=54.6MiB (57.2MB), run=1001-1001msec 00:21:43.668 WRITE: bw=57.5MiB/s (60.3MB/s), 8683KiB/s-18.0MiB/s (8892kB/s-18.9MB/s), io=57.6MiB (60.4MB), run=1001-1001msec 00:21:43.668 00:21:43.668 Disk stats (read/write): 00:21:43.668 nvme0n1: ios=3634/3816, merge=0/0, ticks=337/299, in_queue=636, util=85.67% 00:21:43.668 nvme0n2: ios=2560/2915, merge=0/0, ticks=254/269, in_queue=523, util=86.25% 00:21:43.668 nvme0n3: ios=1544/2048, merge=0/0, ticks=232/267, in_queue=499, util=88.78% 00:21:43.668 nvme0n4: ios=3584/3619, merge=0/0, ticks=336/299, in_queue=635, util=89.63% 00:21:43.668 13:49:36 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:43.668 [global] 00:21:43.668 thread=1 00:21:43.668 invalidate=1 00:21:43.668 rw=randwrite 00:21:43.668 time_based=1 00:21:43.668 runtime=1 00:21:43.668 ioengine=libaio 00:21:43.668 direct=1 00:21:43.668 bs=4096 00:21:43.668 iodepth=1 00:21:43.668 norandommap=0 00:21:43.668 numjobs=1 00:21:43.668 00:21:43.668 verify_dump=1 00:21:43.668 verify_backlog=512 00:21:43.668 verify_state_save=0 00:21:43.668 do_verify=1 00:21:43.668 verify=crc32c-intel 00:21:43.668 [job0] 00:21:43.668 filename=/dev/nvme0n1 00:21:43.668 [job1] 00:21:43.668 filename=/dev/nvme0n2 00:21:43.668 [job2] 00:21:43.668 filename=/dev/nvme0n3 00:21:43.668 [job3] 00:21:43.668 filename=/dev/nvme0n4 00:21:43.668 Could not set queue depth (nvme0n1) 00:21:43.668 Could not set queue depth (nvme0n2) 00:21:43.668 Could not set queue depth (nvme0n3) 00:21:43.668 Could not set queue depth (nvme0n4) 00:21:43.930 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:43.930 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:43.930 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:43.930 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:43.930 fio-3.35 00:21:43.930 Starting 4 threads 00:21:45.317 00:21:45.317 
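The shell trace near the top of this section (autotest_common.sh lines 1197-1207) is the waitforserial helper: after nvme connect it repeatedly runs lsblk -l -o NAME,SERIAL and counts devices whose serial is SPDKISFASTANDAWESOME until the expected number of namespaces (4 here) is visible. A minimal standalone bash sketch of that polling pattern, assuming the same serial string and the 15-iteration budget seen in the trace (the helper's exact internals and sleep placement may differ), could look like:

# Poll until the expected number of NVMe namespaces with a given serial appear.
# Sketch of the waitforserial pattern traced above; not the exact helper.
wait_for_serial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && return 0
        sleep 2
    done
    echo "timed out waiting for $expected devices with serial $serial" >&2
    return 1
}

# Usage matching this test: wait for the 4 namespaces exported by cnode1.
wait_for_serial SPDKISFASTANDAWESOME 4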
job0: (groupid=0, jobs=1): err= 0: pid=2139546: Tue Jun 11 13:49:37 2024 00:21:45.317 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:21:45.317 slat (nsec): min=5726, max=49193, avg=13759.89, stdev=10850.82 00:21:45.317 clat (usec): min=45, max=452, avg=139.33, stdev=87.10 00:21:45.317 lat (usec): min=51, max=462, avg=153.09, stdev=94.11 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 51], 5.00th=[ 55], 10.00th=[ 65], 20.00th=[ 72], 00:21:45.317 | 30.00th=[ 76], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 110], 00:21:45.317 | 70.00th=[ 200], 80.00th=[ 235], 90.00th=[ 265], 95.00th=[ 293], 00:21:45.317 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 404], 99.95th=[ 433], 00:21:45.317 | 99.99th=[ 453] 00:21:45.317 write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets 00:21:45.317 slat (nsec): min=7693, max=65600, avg=20604.69, stdev=12185.94 00:21:45.317 clat (usec): min=45, max=483, avg=182.08, stdev=98.94 00:21:45.317 lat (usec): min=53, max=491, avg=202.69, stdev=105.87 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 50], 5.00th=[ 60], 10.00th=[ 68], 20.00th=[ 73], 00:21:45.317 | 30.00th=[ 79], 40.00th=[ 117], 50.00th=[ 206], 60.00th=[ 237], 00:21:45.317 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 306], 95.00th=[ 330], 00:21:45.317 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 449], 99.95th=[ 465], 00:21:45.317 | 99.99th=[ 486] 00:21:45.317 bw ( KiB/s): min=12263, max=12263, per=18.38%, avg=12263.00, stdev= 0.00, samples=1 00:21:45.317 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:21:45.317 lat (usec) : 50=0.99%, 100=46.50%, 250=28.76%, 500=23.75% 00:21:45.317 cpu : usr=6.30%, sys=13.50%, ctx=5432, majf=0, minf=1 00:21:45.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:45.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 issued rwts: total=2560,2872,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:45.317 job1: (groupid=0, jobs=1): err= 0: pid=2139554: Tue Jun 11 13:49:37 2024 00:21:45.317 read: IOPS=5802, BW=22.7MiB/s (23.8MB/s)(22.7MiB/1001msec) 00:21:45.317 slat (nsec): min=5282, max=46970, avg=7430.60, stdev=5662.91 00:21:45.317 clat (usec): min=42, max=412, avg=71.87, stdev=50.76 00:21:45.317 lat (usec): min=50, max=423, avg=79.30, stdev=55.26 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:21:45.317 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 59], 00:21:45.317 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 223], 00:21:45.317 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 383], 99.95th=[ 396], 00:21:45.317 | 99.99th=[ 412] 00:21:45.317 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:21:45.317 slat (nsec): min=7460, max=53325, avg=9641.40, stdev=5887.50 00:21:45.317 clat (usec): min=38, max=457, avg=73.10, stdev=58.66 00:21:45.317 lat (usec): min=50, max=489, avg=82.74, stdev=63.35 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 45], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:21:45.317 | 30.00th=[ 51], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 58], 00:21:45.317 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 249], 00:21:45.317 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 429], 99.95th=[ 445], 00:21:45.317 | 99.99th=[ 457] 00:21:45.317 bw ( KiB/s): 
min=16806, max=16806, per=25.19%, avg=16806.00, stdev= 0.00, samples=1 00:21:45.317 iops : min= 4201, max= 4201, avg=4201.00, stdev= 0.00, samples=1 00:21:45.317 lat (usec) : 50=17.49%, 100=74.69%, 250=3.70%, 500=4.12% 00:21:45.317 cpu : usr=8.40%, sys=12.10%, ctx=11952, majf=0, minf=1 00:21:45.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:45.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 issued rwts: total=5808,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:45.317 job2: (groupid=0, jobs=1): err= 0: pid=2139570: Tue Jun 11 13:49:37 2024 00:21:45.317 read: IOPS=2805, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:21:45.317 slat (nsec): min=5838, max=48142, avg=17253.99, stdev=11669.64 00:21:45.317 clat (usec): min=53, max=459, avg=179.26, stdev=96.79 00:21:45.317 lat (usec): min=59, max=488, avg=196.51, stdev=103.18 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 57], 5.00th=[ 62], 10.00th=[ 70], 20.00th=[ 76], 00:21:45.317 | 30.00th=[ 81], 40.00th=[ 105], 50.00th=[ 200], 60.00th=[ 231], 00:21:45.317 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 338], 00:21:45.317 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 437], 99.95th=[ 449], 00:21:45.317 | 99.99th=[ 461] 00:21:45.317 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:21:45.317 slat (nsec): min=7813, max=51726, avg=14064.82, stdev=10367.63 00:21:45.317 clat (usec): min=49, max=455, avg=123.66, stdev=88.84 00:21:45.317 lat (usec): min=58, max=477, avg=137.72, stdev=96.33 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 52], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 63], 00:21:45.317 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 83], 00:21:45.317 | 70.00th=[ 110], 80.00th=[ 221], 90.00th=[ 265], 95.00th=[ 310], 00:21:45.317 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 433], 99.95th=[ 445], 00:21:45.317 | 99.99th=[ 457] 00:21:45.317 bw ( KiB/s): min=13696, max=13696, per=20.53%, avg=13696.00, stdev= 0.00, samples=1 00:21:45.317 iops : min= 3424, max= 3424, avg=3424.00, stdev= 0.00, samples=1 00:21:45.317 lat (usec) : 50=0.02%, 100=54.46%, 250=25.41%, 500=20.12% 00:21:45.317 cpu : usr=6.40%, sys=12.90%, ctx=5880, majf=0, minf=1 00:21:45.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:45.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 issued rwts: total=2808,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:45.317 job3: (groupid=0, jobs=1): err= 0: pid=2139576: Tue Jun 11 13:49:37 2024 00:21:45.317 read: IOPS=4265, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1001msec) 00:21:45.317 slat (nsec): min=5680, max=47409, avg=8592.22, stdev=6885.00 00:21:45.317 clat (usec): min=42, max=538, avg=89.15, stdev=64.81 00:21:45.317 lat (usec): min=57, max=544, avg=97.74, stdev=70.03 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 00:21:45.317 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 69], 00:21:45.317 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 202], 95.00th=[ 262], 00:21:45.317 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 416], 99.95th=[ 449], 00:21:45.317 | 
99.99th=[ 537] 00:21:45.317 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:21:45.317 slat (nsec): min=7813, max=51405, avg=12741.84, stdev=9211.73 00:21:45.317 clat (usec): min=46, max=500, avg=108.02, stdev=84.73 00:21:45.317 lat (usec): min=57, max=508, avg=120.76, stdev=91.80 00:21:45.317 clat percentiles (usec): 00:21:45.317 | 1.00th=[ 53], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 59], 00:21:45.317 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:21:45.317 | 70.00th=[ 78], 80.00th=[ 192], 90.00th=[ 262], 95.00th=[ 293], 00:21:45.317 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 441], 99.95th=[ 453], 00:21:45.317 | 99.99th=[ 502] 00:21:45.317 bw ( KiB/s): min=16351, max=16351, per=24.51%, avg=16351.00, stdev= 0.00, samples=1 00:21:45.317 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:21:45.317 lat (usec) : 50=0.06%, 100=81.21%, 250=9.15%, 500=9.56%, 750=0.02% 00:21:45.317 cpu : usr=7.00%, sys=13.30%, ctx=8878, majf=0, minf=1 00:21:45.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:45.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.317 issued rwts: total=4270,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:45.317 00:21:45.317 Run status group 0 (all jobs): 00:21:45.317 READ: bw=60.3MiB/s (63.2MB/s), 9.99MiB/s-22.7MiB/s (10.5MB/s-23.8MB/s), io=60.3MiB (63.3MB), run=1001-1001msec 00:21:45.317 WRITE: bw=65.2MiB/s (68.3MB/s), 11.2MiB/s-24.0MiB/s (11.8MB/s-25.1MB/s), io=65.2MiB (68.4MB), run=1001-1001msec 00:21:45.317 00:21:45.317 Disk stats (read/write): 00:21:45.317 nvme0n1: ios=2098/2427, merge=0/0, ticks=213/274, in_queue=487, util=86.17% 00:21:45.318 nvme0n2: ios=4608/4967, merge=0/0, ticks=265/279, in_queue=544, util=86.28% 00:21:45.318 nvme0n3: ios=2342/2560, merge=0/0, ticks=279/238, in_queue=517, util=88.71% 00:21:45.318 nvme0n4: ios=3584/3904, merge=0/0, ticks=249/293, in_queue=542, util=89.65% 00:21:45.318 13:49:37 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:45.318 [global] 00:21:45.318 thread=1 00:21:45.318 invalidate=1 00:21:45.318 rw=write 00:21:45.318 time_based=1 00:21:45.318 runtime=1 00:21:45.318 ioengine=libaio 00:21:45.318 direct=1 00:21:45.318 bs=4096 00:21:45.318 iodepth=128 00:21:45.318 norandommap=0 00:21:45.318 numjobs=1 00:21:45.318 00:21:45.318 verify_dump=1 00:21:45.318 verify_backlog=512 00:21:45.318 verify_state_save=0 00:21:45.318 do_verify=1 00:21:45.318 verify=crc32c-intel 00:21:45.318 [job0] 00:21:45.318 filename=/dev/nvme0n1 00:21:45.318 [job1] 00:21:45.318 filename=/dev/nvme0n2 00:21:45.318 [job2] 00:21:45.318 filename=/dev/nvme0n3 00:21:45.318 [job3] 00:21:45.318 filename=/dev/nvme0n4 00:21:45.318 Could not set queue depth (nvme0n1) 00:21:45.318 Could not set queue depth (nvme0n2) 00:21:45.318 Could not set queue depth (nvme0n3) 00:21:45.318 Could not set queue depth (nvme0n4) 00:21:45.577 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:45.577 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:45.577 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:45.577 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:45.577 fio-3.35 00:21:45.577 Starting 4 threads 00:21:46.961 00:21:46.961 job0: (groupid=0, jobs=1): err= 0: pid=2140050: Tue Jun 11 13:49:39 2024 00:21:46.961 read: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(40.0MiB/1002msec) 00:21:46.961 slat (nsec): min=1113, max=2786.6k, avg=46581.15, stdev=173881.32 00:21:46.961 clat (usec): min=998, max=25481, avg=5966.77, stdev=2934.56 00:21:46.961 lat (usec): min=1000, max=25484, avg=6013.35, stdev=2957.83 00:21:46.961 clat percentiles (usec): 00:21:46.961 | 1.00th=[ 4686], 5.00th=[ 4948], 10.00th=[ 5145], 20.00th=[ 5342], 00:21:46.961 | 30.00th=[ 5407], 40.00th=[ 5473], 50.00th=[ 5538], 60.00th=[ 5538], 00:21:46.961 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5800], 95.00th=[ 6063], 00:21:46.961 | 99.00th=[24511], 99.50th=[25035], 99.90th=[25297], 99.95th=[25297], 00:21:46.961 | 99.99th=[25560] 00:21:46.961 write: IOPS=10.7k, BW=41.9MiB/s (44.0MB/s)(42.0MiB/1002msec); 0 zone resets 00:21:46.961 slat (nsec): min=1635, max=3237.3k, avg=46121.19, stdev=179031.39 00:21:46.961 clat (usec): min=1020, max=24981, avg=6064.75, stdev=3958.63 00:21:46.961 lat (usec): min=1642, max=25327, avg=6110.87, stdev=3984.17 00:21:46.961 clat percentiles (usec): 00:21:46.961 | 1.00th=[ 4359], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:21:46.961 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5211], 60.00th=[ 5276], 00:21:46.961 | 70.00th=[ 5342], 80.00th=[ 5407], 90.00th=[ 5604], 95.00th=[ 5997], 00:21:46.961 | 99.00th=[24511], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:21:46.961 | 99.99th=[25035] 00:21:46.961 bw ( KiB/s): min=36864, max=36864, per=36.22%, avg=36864.00, stdev= 0.00, samples=1 00:21:46.961 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=1 00:21:46.961 lat (usec) : 1000=0.01% 00:21:46.961 lat (msec) : 2=0.10%, 4=0.25%, 10=95.77%, 20=0.44%, 50=3.43% 00:21:46.961 cpu : usr=3.80%, sys=6.19%, ctx=1613, majf=0, minf=1 00:21:46.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:46.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:46.961 issued rwts: total=10250,10752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:46.962 job1: (groupid=0, jobs=1): err= 0: pid=2140058: Tue Jun 11 13:49:39 2024 00:21:46.962 read: IOPS=2386, BW=9547KiB/s (9776kB/s)(9604KiB/1006msec) 00:21:46.962 slat (nsec): min=1227, max=3928.6k, avg=203241.99, stdev=479834.78 00:21:46.962 clat (usec): min=5111, max=31083, avg=25695.85, stdev=2386.30 00:21:46.962 lat (usec): min=8431, max=31085, avg=25899.09, stdev=2349.56 00:21:46.962 clat percentiles (usec): 00:21:46.962 | 1.00th=[10945], 5.00th=[23987], 10.00th=[24511], 20.00th=[25560], 00:21:46.962 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:21:46.962 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:21:46.962 | 99.00th=[28443], 99.50th=[29492], 99.90th=[30802], 99.95th=[31065], 00:21:46.962 | 99.99th=[31065] 00:21:46.962 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:21:46.962 slat (nsec): min=1688, max=3973.5k, avg=196783.03, stdev=466220.68 00:21:46.962 clat (usec): min=15372, max=29397, avg=25475.78, stdev=1245.48 00:21:46.962 lat (usec): min=15418, max=30010, avg=25672.57, stdev=1213.94 00:21:46.962 clat percentiles 
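The job file just printed was generated by fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v: 4 KiB blocks, queue depth 128, a 1-second time-based sequential write per namespace, verified with crc32c-intel. Roughly the same workload can be expressed directly on the fio command line; the sketch below covers a single device only and approximates the generated job file rather than reproducing the wrapper:

# Approximate single-device equivalent of the [job0] section above
# (the wrapper emits one [jobN] section per attached namespace).
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=128 --numjobs=1 \
    --time_based=1 --runtime=1 \
    --verify=crc32c-intel --do_verify=1 \
    --verify_backlog=512 --verify_state_save=0 --verify_dump=1 \
    --invalidate=1 --norandommap=0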
(usec): 00:21:46.962 | 1.00th=[20317], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:21:46.962 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:21:46.962 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26870], 00:21:46.962 | 99.00th=[27132], 99.50th=[27132], 99.90th=[28967], 99.95th=[29492], 00:21:46.962 | 99.99th=[29492] 00:21:46.962 bw ( KiB/s): min= 9440, max=11040, per=10.06%, avg=10240.00, stdev=1131.37, samples=2 00:21:46.962 iops : min= 2360, max= 2760, avg=2560.00, stdev=282.84, samples=2 00:21:46.962 lat (msec) : 10=0.42%, 20=1.37%, 50=98.21% 00:21:46.962 cpu : usr=1.49%, sys=2.79%, ctx=1637, majf=0, minf=1 00:21:46.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:46.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:46.962 issued rwts: total=2401,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:46.962 job2: (groupid=0, jobs=1): err= 0: pid=2140074: Tue Jun 11 13:49:39 2024 00:21:46.962 read: IOPS=2414, BW=9658KiB/s (9890kB/s)(9716KiB/1006msec) 00:21:46.962 slat (nsec): min=1229, max=3767.2k, avg=201829.00, stdev=485627.89 00:21:46.962 clat (usec): min=5171, max=30114, avg=25543.06, stdev=2649.77 00:21:46.962 lat (usec): min=5570, max=31164, avg=25744.89, stdev=2625.86 00:21:46.962 clat percentiles (usec): 00:21:46.962 | 1.00th=[ 8979], 5.00th=[23725], 10.00th=[24249], 20.00th=[25297], 00:21:46.962 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:21:46.962 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:21:46.962 | 99.00th=[27919], 99.50th=[28967], 99.90th=[29754], 99.95th=[30016], 00:21:46.962 | 99.99th=[30016] 00:21:46.962 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:21:46.962 slat (nsec): min=1711, max=3984.4k, avg=195815.72, stdev=472129.54 00:21:46.962 clat (usec): min=17626, max=27305, avg=25329.89, stdev=1146.12 00:21:46.962 lat (usec): min=17636, max=29547, avg=25525.70, stdev=1100.65 00:21:46.962 clat percentiles (usec): 00:21:46.962 | 1.00th=[20317], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:21:46.962 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:21:46.962 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26346], 95.00th=[26608], 00:21:46.962 | 99.00th=[26870], 99.50th=[26870], 99.90th=[27132], 99.95th=[27395], 00:21:46.962 | 99.99th=[27395] 00:21:46.962 bw ( KiB/s): min= 9304, max=11176, per=10.06%, avg=10240.00, stdev=1323.70, samples=2 00:21:46.962 iops : min= 2326, max= 2794, avg=2560.00, stdev=330.93, samples=2 00:21:46.962 lat (msec) : 10=0.58%, 20=1.40%, 50=98.02% 00:21:46.962 cpu : usr=1.49%, sys=2.59%, ctx=1691, majf=0, minf=2 00:21:46.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:46.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:46.962 issued rwts: total=2429,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:46.962 job3: (groupid=0, jobs=1): err= 0: pid=2140079: Tue Jun 11 13:49:39 2024 00:21:46.962 read: IOPS=9567, BW=37.4MiB/s (39.2MB/s)(37.6MiB/1006msec) 00:21:46.962 slat (nsec): min=1226, max=2282.8k, avg=50753.38, stdev=180898.55 00:21:46.962 clat (usec): 
min=4965, max=11787, avg=6696.48, stdev=449.21 00:21:46.962 lat (usec): min=5504, max=12277, avg=6747.24, stdev=467.97 00:21:46.962 clat percentiles (usec): 00:21:46.962 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6194], 20.00th=[ 6456], 00:21:46.962 | 30.00th=[ 6587], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6783], 00:21:46.962 | 70.00th=[ 6849], 80.00th=[ 6849], 90.00th=[ 6980], 95.00th=[ 7242], 00:21:46.962 | 99.00th=[ 7898], 99.50th=[ 8848], 99.90th=[11731], 99.95th=[11731], 00:21:46.962 | 99.99th=[11731] 00:21:46.962 write: IOPS=9669, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1006msec); 0 zone resets 00:21:46.962 slat (nsec): min=1719, max=2741.1k, avg=49930.92, stdev=173246.54 00:21:46.962 clat (usec): min=1127, max=9783, avg=6489.63, stdev=458.98 00:21:46.962 lat (usec): min=1137, max=9793, avg=6539.56, stdev=479.79 00:21:46.962 clat percentiles (usec): 00:21:46.962 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:21:46.962 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6521], 60.00th=[ 6587], 00:21:46.962 | 70.00th=[ 6652], 80.00th=[ 6718], 90.00th=[ 6849], 95.00th=[ 7046], 00:21:46.962 | 99.00th=[ 7373], 99.50th=[ 7701], 99.90th=[ 8586], 99.95th=[ 8586], 00:21:46.962 | 99.99th=[ 9765] 00:21:46.962 bw ( KiB/s): min=36864, max=40960, per=38.23%, avg=38912.00, stdev=2896.31, samples=2 00:21:46.962 iops : min= 9216, max=10240, avg=9728.00, stdev=724.08, samples=2 00:21:46.962 lat (msec) : 2=0.06%, 4=0.20%, 10=99.50%, 20=0.24% 00:21:46.962 cpu : usr=3.28%, sys=7.66%, ctx=1615, majf=0, minf=1 00:21:46.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:46.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:46.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:46.962 issued rwts: total=9625,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:46.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:46.962 00:21:46.962 Run status group 0 (all jobs): 00:21:46.962 READ: bw=95.9MiB/s (101MB/s), 9547KiB/s-40.0MiB/s (9776kB/s-41.9MB/s), io=96.5MiB (101MB), run=1002-1006msec 00:21:46.962 WRITE: bw=99.4MiB/s (104MB/s), 9.94MiB/s-41.9MiB/s (10.4MB/s-44.0MB/s), io=100MiB (105MB), run=1002-1006msec 00:21:46.962 00:21:46.962 Disk stats (read/write): 00:21:46.962 nvme0n1: ios=8633/8704, merge=0/0, ticks=16258/16319, in_queue=32577, util=85.97% 00:21:46.962 nvme0n2: ios=2048/2141, merge=0/0, ticks=13247/13367, in_queue=26614, util=85.98% 00:21:46.962 nvme0n3: ios=2048/2158, merge=0/0, ticks=13239/13445, in_queue=26684, util=88.60% 00:21:46.962 nvme0n4: ios=8108/8192, merge=0/0, ticks=52540/51605, in_queue=104145, util=89.55% 00:21:46.962 13:49:39 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:46.962 [global] 00:21:46.962 thread=1 00:21:46.962 invalidate=1 00:21:46.962 rw=randwrite 00:21:46.962 time_based=1 00:21:46.962 runtime=1 00:21:46.962 ioengine=libaio 00:21:46.962 direct=1 00:21:46.962 bs=4096 00:21:46.962 iodepth=128 00:21:46.962 norandommap=0 00:21:46.962 numjobs=1 00:21:46.962 00:21:46.962 verify_dump=1 00:21:46.962 verify_backlog=512 00:21:46.962 verify_state_save=0 00:21:46.962 do_verify=1 00:21:46.962 verify=crc32c-intel 00:21:46.963 [job0] 00:21:46.963 filename=/dev/nvme0n1 00:21:46.963 [job1] 00:21:46.963 filename=/dev/nvme0n2 00:21:46.963 [job2] 00:21:46.963 filename=/dev/nvme0n3 00:21:46.963 [job3] 00:21:46.963 filename=/dev/nvme0n4 00:21:46.963 Could 
not set queue depth (nvme0n1) 00:21:46.963 Could not set queue depth (nvme0n2) 00:21:46.963 Could not set queue depth (nvme0n3) 00:21:46.963 Could not set queue depth (nvme0n4) 00:21:47.222 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:47.222 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:47.222 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:47.222 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:47.222 fio-3.35 00:21:47.222 Starting 4 threads 00:21:48.604 00:21:48.604 job0: (groupid=0, jobs=1): err= 0: pid=2140562: Tue Jun 11 13:49:41 2024 00:21:48.604 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(45.6MiB/1004msec) 00:21:48.604 slat (nsec): min=1198, max=1594.9k, avg=42624.96, stdev=157593.59 00:21:48.604 clat (usec): min=3304, max=8500, avg=5560.47, stdev=669.61 00:21:48.604 lat (usec): min=3948, max=8506, avg=5603.09, stdev=672.94 00:21:48.604 clat percentiles (usec): 00:21:48.604 | 1.00th=[ 4359], 5.00th=[ 4686], 10.00th=[ 4817], 20.00th=[ 5145], 00:21:48.604 | 30.00th=[ 5276], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5473], 00:21:48.604 | 70.00th=[ 5604], 80.00th=[ 6063], 90.00th=[ 6652], 95.00th=[ 6980], 00:21:48.604 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[ 8455], 99.95th=[ 8455], 00:21:48.604 | 99.99th=[ 8455] 00:21:48.604 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(46.0MiB/1004msec); 0 zone resets 00:21:48.604 slat (nsec): min=1656, max=1453.2k, avg=40883.91, stdev=151363.22 00:21:48.604 clat (usec): min=1095, max=8556, avg=5309.25, stdev=693.06 00:21:48.604 lat (usec): min=1104, max=8562, avg=5350.13, stdev=697.91 00:21:48.604 clat percentiles (usec): 00:21:48.604 | 1.00th=[ 4146], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4883], 00:21:48.604 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:21:48.604 | 70.00th=[ 5407], 80.00th=[ 5866], 90.00th=[ 6456], 95.00th=[ 6652], 00:21:48.604 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[ 7767], 99.95th=[ 7963], 00:21:48.604 | 99.99th=[ 8225] 00:21:48.604 bw ( KiB/s): min=45056, max=49152, per=28.99%, avg=47104.00, stdev=2896.31, samples=2 00:21:48.604 iops : min=11264, max=12288, avg=11776.00, stdev=724.08, samples=2 00:21:48.604 lat (msec) : 2=0.06%, 4=0.35%, 10=99.59% 00:21:48.604 cpu : usr=3.99%, sys=5.48%, ctx=1549, majf=0, minf=1 00:21:48.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:48.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.604 issued rwts: total=11663,11776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.604 job1: (groupid=0, jobs=1): err= 0: pid=2140572: Tue Jun 11 13:49:41 2024 00:21:48.604 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(48.0MiB/1001msec) 00:21:48.604 slat (nsec): min=1152, max=2028.3k, avg=39283.00, stdev=144213.66 00:21:48.604 clat (usec): min=3804, max=7679, avg=5087.70, stdev=401.87 00:21:48.604 lat (usec): min=3899, max=7687, avg=5126.98, stdev=404.85 00:21:48.604 clat percentiles (usec): 00:21:48.605 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4555], 20.00th=[ 4686], 00:21:48.605 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:21:48.605 | 70.00th=[ 5276], 
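The last fio invocation in this section (the 10-second read job at fio.sh line 58, further below) is the hotplug check: the job is left running in the background while the backing raid, concat and malloc bdevs are deleted over RPC, so the namespaces vanish mid-I/O and fio is expected to fail with Remote I/O errors rather than complete cleanly. A condensed bash sketch of that pattern, using the same rpc.py calls seen in the trace but with simplified pid and status handling, might be:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Start a long read job in the background against one exported namespace.
fio --name=hotplug --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=read --bs=4096 --iodepth=1 --time_based=1 --runtime=10 &
fio_pid=$!

sleep 3

# Pull the backing bdevs out from under the running job.
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
$RPC bdev_malloc_delete Malloc0

# fio should now exit non-zero with Remote I/O errors; a clean exit would
# mean the hotplug went unnoticed, which the test treats as a failure.
if wait "$fio_pid"; then
    echo "fio unexpectedly succeeded" >&2
    exit 1
fi
echo "nvmf hotplug test: fio failed as expected"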
80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5735], 00:21:48.605 | 99.00th=[ 5997], 99.50th=[ 6063], 99.90th=[ 7111], 99.95th=[ 7111], 00:21:48.605 | 99.99th=[ 7111] 00:21:48.605 write: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(49.7MiB/1001msec); 0 zone resets 00:21:48.605 slat (nsec): min=1587, max=2421.0k, avg=38952.22, stdev=148461.41 00:21:48.605 clat (usec): min=775, max=11215, avg=5061.26, stdev=1055.13 00:21:48.605 lat (usec): min=1035, max=11217, avg=5100.21, stdev=1060.91 00:21:48.605 clat percentiles (usec): 00:21:48.605 | 1.00th=[ 3916], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4490], 00:21:48.605 | 30.00th=[ 4686], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 00:21:48.605 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 5735], 00:21:48.605 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11207], 99.95th=[11207], 00:21:48.605 | 99.99th=[11207] 00:21:48.605 bw ( KiB/s): min=49152, max=51592, per=31.00%, avg=50372.00, stdev=1725.34, samples=2 00:21:48.605 iops : min=12288, max=12898, avg=12593.00, stdev=431.34, samples=2 00:21:48.605 lat (usec) : 1000=0.01% 00:21:48.605 lat (msec) : 2=0.12%, 4=0.55%, 10=98.15%, 20=1.17% 00:21:48.605 cpu : usr=4.10%, sys=4.90%, ctx=1866, majf=0, minf=1 00:21:48.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.605 issued rwts: total=12288,12720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.605 job2: (groupid=0, jobs=1): err= 0: pid=2140588: Tue Jun 11 13:49:41 2024 00:21:48.605 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:21:48.605 slat (nsec): min=1174, max=6586.8k, avg=68967.85, stdev=324948.16 00:21:48.605 clat (usec): min=4885, max=27516, avg=8562.26, stdev=4797.56 00:21:48.605 lat (usec): min=5024, max=30181, avg=8631.23, stdev=4827.12 00:21:48.605 clat percentiles (usec): 00:21:48.605 | 1.00th=[ 5276], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5866], 00:21:48.605 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6718], 00:21:48.605 | 70.00th=[ 6980], 80.00th=[10028], 90.00th=[17171], 95.00th=[20055], 00:21:48.605 | 99.00th=[24773], 99.50th=[26346], 99.90th=[27132], 99.95th=[27132], 00:21:48.605 | 99.99th=[27395] 00:21:48.605 write: IOPS=7653, BW=29.9MiB/s (31.3MB/s)(29.9MiB/1001msec); 0 zone resets 00:21:48.605 slat (nsec): min=1632, max=6317.6k, avg=63692.41, stdev=294410.51 00:21:48.605 clat (usec): min=458, max=26620, avg=8504.30, stdev=4678.01 00:21:48.605 lat (usec): min=1040, max=27748, avg=8567.99, stdev=4702.75 00:21:48.605 clat percentiles (usec): 00:21:48.605 | 1.00th=[ 4293], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5866], 00:21:48.605 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6587], 00:21:48.605 | 70.00th=[ 7046], 80.00th=[10945], 90.00th=[16909], 95.00th=[19530], 00:21:48.605 | 99.00th=[23462], 99.50th=[24773], 99.90th=[26608], 99.95th=[26608], 00:21:48.605 | 99.99th=[26608] 00:21:48.605 bw ( KiB/s): min=22968, max=37304, per=18.55%, avg=30136.00, stdev=10137.08, samples=2 00:21:48.605 iops : min= 5742, max= 9326, avg=7534.00, stdev=2534.27, samples=2 00:21:48.605 lat (usec) : 500=0.01% 00:21:48.605 lat (msec) : 2=0.20%, 4=0.24%, 10=78.44%, 20=16.65%, 50=4.46% 00:21:48.605 cpu : usr=1.70%, sys=3.90%, ctx=1093, majf=0, minf=1 00:21:48.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.2%, >=64=99.6% 00:21:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.605 issued rwts: total=7168,7661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.605 job3: (groupid=0, jobs=1): err= 0: pid=2140594: Tue Jun 11 13:49:41 2024 00:21:48.605 read: IOPS=8287, BW=32.4MiB/s (33.9MB/s)(32.6MiB/1006msec) 00:21:48.605 slat (nsec): min=1204, max=4100.2k, avg=51092.54, stdev=248097.79 00:21:48.605 clat (usec): min=245, max=23233, avg=7727.37, stdev=4028.33 00:21:48.605 lat (usec): min=388, max=23241, avg=7778.46, stdev=4065.65 00:21:48.605 clat percentiles (usec): 00:21:48.605 | 1.00th=[ 1483], 5.00th=[ 2868], 10.00th=[ 3654], 20.00th=[ 4817], 00:21:48.605 | 30.00th=[ 5932], 40.00th=[ 6718], 50.00th=[ 7308], 60.00th=[ 7963], 00:21:48.605 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[19268], 00:21:48.605 | 99.00th=[20317], 99.50th=[21365], 99.90th=[22676], 99.95th=[23200], 00:21:48.605 | 99.99th=[23200] 00:21:48.605 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec); 0 zone resets 00:21:48.605 slat (nsec): min=1710, max=3540.7k, avg=44068.95, stdev=210911.12 00:21:48.605 clat (usec): min=173, max=22983, avg=7280.51, stdev=3294.81 00:21:48.605 lat (usec): min=269, max=22990, avg=7324.58, stdev=3318.54 00:21:48.605 clat percentiles (usec): 00:21:48.605 | 1.00th=[ 1172], 5.00th=[ 2769], 10.00th=[ 3392], 20.00th=[ 4817], 00:21:48.605 | 30.00th=[ 5997], 40.00th=[ 6849], 50.00th=[ 7439], 60.00th=[ 8029], 00:21:48.605 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[12911], 00:21:48.605 | 99.00th=[20055], 99.50th=[20055], 99.90th=[21365], 99.95th=[22414], 00:21:48.605 | 99.99th=[22938] 00:21:48.605 bw ( KiB/s): min=28672, max=40960, per=21.43%, avg=34816.00, stdev=8688.93, samples=2 00:21:48.605 iops : min= 7168, max=10240, avg=8704.00, stdev=2172.23, samples=2 00:21:48.605 lat (usec) : 250=0.03%, 500=0.09%, 750=0.23%, 1000=0.13% 00:21:48.605 lat (msec) : 2=1.59%, 4=11.64%, 10=77.99%, 20=6.81%, 50=1.49% 00:21:48.605 cpu : usr=4.08%, sys=7.76%, ctx=1009, majf=0, minf=1 00:21:48.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:21:48.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:48.605 issued rwts: total=8337,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:48.605 00:21:48.605 Run status group 0 (all jobs): 00:21:48.605 READ: bw=153MiB/s (161MB/s), 28.0MiB/s-48.0MiB/s (29.3MB/s-50.3MB/s), io=154MiB (162MB), run=1001-1006msec 00:21:48.605 WRITE: bw=159MiB/s (166MB/s), 29.9MiB/s-49.6MiB/s (31.3MB/s-52.0MB/s), io=160MiB (167MB), run=1001-1006msec 00:21:48.605 00:21:48.605 Disk stats (read/write): 00:21:48.605 nvme0n1: ios=9778/9841, merge=0/0, ticks=53201/51225, in_queue=104426, util=85.77% 00:21:48.605 nvme0n2: ios=10465/10752, merge=0/0, ticks=12667/12594, in_queue=25261, util=86.08% 00:21:48.605 nvme0n3: ios=5842/6144, merge=0/0, ticks=13542/13463, in_queue=27005, util=88.60% 00:21:48.605 nvme0n4: ios=7168/7294, merge=0/0, ticks=44880/44647, in_queue=89527, util=89.44% 00:21:48.605 13:49:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:21:48.605 13:49:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2140872 00:21:48.605 13:49:41 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:21:48.605 13:49:41 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:48.605 [global] 00:21:48.605 thread=1 00:21:48.605 invalidate=1 00:21:48.605 rw=read 00:21:48.605 time_based=1 00:21:48.605 runtime=10 00:21:48.605 ioengine=libaio 00:21:48.605 direct=1 00:21:48.605 bs=4096 00:21:48.605 iodepth=1 00:21:48.605 norandommap=1 00:21:48.605 numjobs=1 00:21:48.605 00:21:48.605 [job0] 00:21:48.605 filename=/dev/nvme0n1 00:21:48.605 [job1] 00:21:48.605 filename=/dev/nvme0n2 00:21:48.605 [job2] 00:21:48.605 filename=/dev/nvme0n3 00:21:48.605 [job3] 00:21:48.605 filename=/dev/nvme0n4 00:21:48.605 Could not set queue depth (nvme0n1) 00:21:48.605 Could not set queue depth (nvme0n2) 00:21:48.605 Could not set queue depth (nvme0n3) 00:21:48.605 Could not set queue depth (nvme0n4) 00:21:48.866 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.866 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.866 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.866 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.866 fio-3.35 00:21:48.866 Starting 4 threads 00:21:51.411 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:51.411 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=88064000, buflen=4096 00:21:51.411 fio: pid=2141096, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:51.411 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:51.672 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=79745024, buflen=4096 00:21:51.672 fio: pid=2141091, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:51.672 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:51.672 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:51.672 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=16609280, buflen=4096 00:21:51.672 fio: pid=2141080, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:51.935 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:51.935 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:51.935 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10932224, buflen=4096 00:21:51.935 fio: pid=2141087, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:51.935 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:51.935 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:51.935 00:21:51.935 job0: (groupid=0, 
jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2141080: Tue Jun 11 13:49:44 2024 00:21:51.935 read: IOPS=12.7k, BW=49.5MiB/s (52.0MB/s)(144MiB/2903msec) 00:21:51.935 slat (usec): min=4, max=8325, avg= 7.95, stdev=82.25 00:21:51.935 clat (usec): min=31, max=925, avg=69.63, stdev=43.91 00:21:51.935 lat (usec): min=49, max=8388, avg=77.57, stdev=95.67 00:21:51.935 clat percentiles (usec): 00:21:51.935 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:21:51.935 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 60], 00:21:51.935 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 85], 95.00th=[ 128], 00:21:51.935 | 99.00th=[ 273], 99.50th=[ 318], 99.90th=[ 392], 99.95th=[ 404], 00:21:51.935 | 99.99th=[ 449] 00:21:51.935 bw ( KiB/s): min=24216, max=62424, per=41.44%, avg=52260.80, stdev=16466.36, samples=5 00:21:51.935 iops : min= 6054, max=15606, avg=13065.20, stdev=4116.59, samples=5 00:21:51.935 lat (usec) : 50=6.41%, 100=86.35%, 250=5.27%, 500=1.97%, 1000=0.01% 00:21:51.935 cpu : usr=5.17%, sys=14.78%, ctx=36830, majf=0, minf=1 00:21:51.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 issued rwts: total=36824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.935 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2141087: Tue Jun 11 13:49:44 2024 00:21:51.935 read: IOPS=6202, BW=24.2MiB/s (25.4MB/s)(74.4MiB/3072msec) 00:21:51.935 slat (usec): min=5, max=13805, avg=16.78, stdev=192.83 00:21:51.935 clat (usec): min=44, max=502, avg=141.68, stdev=88.78 00:21:51.935 lat (usec): min=50, max=14026, avg=158.46, stdev=215.54 00:21:51.935 clat percentiles (usec): 00:21:51.935 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 65], 00:21:51.935 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 92], 60.00th=[ 182], 00:21:51.935 | 70.00th=[ 200], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 302], 00:21:51.935 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 429], 99.95th=[ 441], 00:21:51.935 | 99.99th=[ 478] 00:21:51.935 bw ( KiB/s): min=16800, max=24256, per=16.92%, avg=21342.40, stdev=3038.57, samples=5 00:21:51.935 iops : min= 4200, max= 6064, avg=5335.60, stdev=759.64, samples=5 00:21:51.935 lat (usec) : 50=3.24%, 100=48.42%, 250=34.98%, 500=13.36%, 750=0.01% 00:21:51.935 cpu : usr=4.79%, sys=12.99%, ctx=19061, majf=0, minf=1 00:21:51.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 issued rwts: total=19054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.935 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2141091: Tue Jun 11 13:49:44 2024 00:21:51.935 read: IOPS=7108, BW=27.8MiB/s (29.1MB/s)(76.1MiB/2739msec) 00:21:51.935 slat (usec): min=4, max=10515, avg=11.92, stdev=94.65 00:21:51.935 clat (usec): min=52, max=952, avg=126.70, stdev=73.75 00:21:51.935 lat (usec): min=58, max=10604, avg=138.61, stdev=123.45 00:21:51.935 clat percentiles (usec): 00:21:51.935 | 1.00th=[ 61], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 80], 
00:21:51.935 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 97], 00:21:51.935 | 70.00th=[ 110], 80.00th=[ 208], 90.00th=[ 245], 95.00th=[ 269], 00:21:51.935 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 441], 00:21:51.935 | 99.99th=[ 889] 00:21:51.935 bw ( KiB/s): min=22008, max=35104, per=22.48%, avg=28353.60, stdev=5287.24, samples=5 00:21:51.935 iops : min= 5502, max= 8776, avg=7088.40, stdev=1321.81, samples=5 00:21:51.935 lat (usec) : 100=62.92%, 250=28.10%, 500=8.96%, 1000=0.01% 00:21:51.935 cpu : usr=3.98%, sys=12.27%, ctx=19474, majf=0, minf=1 00:21:51.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 issued rwts: total=19470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.935 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2141096: Tue Jun 11 13:49:44 2024 00:21:51.935 read: IOPS=8340, BW=32.6MiB/s (34.2MB/s)(84.0MiB/2578msec) 00:21:51.935 slat (nsec): min=5279, max=56938, avg=9161.31, stdev=6030.52 00:21:51.935 clat (usec): min=43, max=771, avg=109.06, stdev=62.80 00:21:51.935 lat (usec): min=57, max=787, avg=118.22, stdev=65.73 00:21:51.935 clat percentiles (usec): 00:21:51.935 | 1.00th=[ 56], 5.00th=[ 62], 10.00th=[ 66], 20.00th=[ 75], 00:21:51.935 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 87], 60.00th=[ 91], 00:21:51.935 | 70.00th=[ 98], 80.00th=[ 111], 90.00th=[ 229], 95.00th=[ 258], 00:21:51.935 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 412], 99.95th=[ 437], 00:21:51.935 | 99.99th=[ 482] 00:21:51.935 bw ( KiB/s): min=23152, max=39832, per=26.91%, avg=33929.60, stdev=6669.13, samples=5 00:21:51.935 iops : min= 5788, max= 9958, avg=8482.40, stdev=1667.28, samples=5 00:21:51.935 lat (usec) : 50=0.06%, 100=72.63%, 250=21.32%, 500=5.97%, 1000=0.01% 00:21:51.935 cpu : usr=3.84%, sys=11.02%, ctx=21503, majf=0, minf=2 00:21:51.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:51.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.935 issued rwts: total=21501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:51.936 00:21:51.936 Run status group 0 (all jobs): 00:21:51.936 READ: bw=123MiB/s (129MB/s), 24.2MiB/s-49.5MiB/s (25.4MB/s-52.0MB/s), io=378MiB (397MB), run=2578-3072msec 00:21:51.936 00:21:51.936 Disk stats (read/write): 00:21:51.936 nvme0n1: ios=36093/0, merge=0/0, ticks=2046/0, in_queue=2046, util=94.13% 00:21:51.936 nvme0n2: ios=15932/0, merge=0/0, ticks=1779/0, in_queue=1779, util=94.23% 00:21:51.936 nvme0n3: ios=18244/0, merge=0/0, ticks=1706/0, in_queue=1706, util=96.12% 00:21:51.936 nvme0n4: ios=20402/0, merge=0/0, ticks=1841/0, in_queue=1841, util=96.03% 00:21:52.197 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:52.197 13:49:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:52.459 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:21:52.459 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:52.459 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:52.459 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:52.720 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:52.720 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:52.981 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:52.981 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 2140872 00:21:52.981 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:52.981 13:49:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:54.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:54.368 nvmf hotplug test: fio failed as expected 00:21:54.368 13:49:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-rdma 00:21:54.368 rmmod nvme_rdma 00:21:54.368 rmmod nvme_fabrics 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2137368 ']' 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2137368 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 2137368 ']' 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 2137368 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2137368 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2137368' 00:21:54.368 killing process with pid 2137368 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 2137368 00:21:54.368 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 2137368 00:21:54.628 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.628 13:49:47 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:54.628 00:21:54.628 real 0m27.384s 00:21:54.628 user 2m32.178s 00:21:54.628 sys 0m10.168s 00:21:54.628 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:54.628 13:49:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.628 ************************************ 00:21:54.628 END TEST nvmf_fio_target 00:21:54.628 ************************************ 00:21:54.628 13:49:47 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:21:54.628 13:49:47 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:54.628 13:49:47 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:54.628 13:49:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:54.628 ************************************ 00:21:54.628 START TEST nvmf_bdevio 00:21:54.628 ************************************ 00:21:54.628 13:49:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:21:54.890 * Looking for test storage... 
00:21:54.890 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.890 13:49:47 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.035 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:03.036 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:03.036 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:03.036 Found net devices under 0000:98:00.0: mlx_0_0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:03.036 Found net devices under 0000:98:00.1: mlx_0_1 00:22:03.036 
13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:03.036 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:03.036 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:03.036 altname enp152s0f0np0 00:22:03.036 altname ens817f0np0 00:22:03.036 inet 192.168.100.8/24 scope global mlx_0_0 00:22:03.036 valid_lft forever preferred_lft forever 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:03.036 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:03.036 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:03.036 altname enp152s0f1np1 00:22:03.036 altname ens817f1np1 00:22:03.036 inet 192.168.100.9/24 scope global mlx_0_1 00:22:03.036 valid_lft forever preferred_lft forever 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.036 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:03.037 192.168.100.9' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:03.037 192.168.100.9' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:03.037 192.168.100.9' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2146137 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2146137 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 2146137 ']' 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:03.037 13:49:54 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 [2024-06-11 13:49:54.817556] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:22:03.037 [2024-06-11 13:49:54.817622] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.037 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.037 [2024-06-11 13:49:54.900721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.037 [2024-06-11 13:49:54.992941] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.037 [2024-06-11 13:49:54.992996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.037 [2024-06-11 13:49:54.993004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.037 [2024-06-11 13:49:54.993012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.037 [2024-06-11 13:49:54.993025] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
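For reference, the nvmfappstart/waitforlisten sequence traced just above amounts to roughly the following minimal sketch: start nvmf_tgt with the core mask and trace flags shown, then poll its UNIX-domain RPC socket until it answers. The rpc_get_methods probe and the sleep interval are assumptions for illustration only, not necessarily what the harness's own waitforlisten helper does.

    # hedged sketch of the traced nvmf_tgt start-up (paths relative to the spdk checkout)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # wait until the target listens on /var/tmp/spdk.sock; rpc_get_methods is an assumed probe
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done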
00:22:03.037 [2024-06-11 13:49:54.993190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.037 [2024-06-11 13:49:54.993445] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:22:03.037 [2024-06-11 13:49:54.993608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:22:03.037 [2024-06-11 13:49:54.993609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 [2024-06-11 13:49:55.704584] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1acc700/0x1ad0bf0) succeed. 00:22:03.037 [2024-06-11 13:49:55.720090] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1acdd40/0x1b12280) succeed. 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 Malloc0 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.037 [2024-06-11 13:49:55.930578] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.037 { 00:22:03.037 "params": { 00:22:03.037 "name": "Nvme$subsystem", 00:22:03.037 "trtype": "$TEST_TRANSPORT", 00:22:03.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.037 "adrfam": "ipv4", 00:22:03.037 "trsvcid": "$NVMF_PORT", 00:22:03.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.037 "hdgst": ${hdgst:-false}, 00:22:03.037 "ddgst": ${ddgst:-false} 00:22:03.037 }, 00:22:03.037 "method": "bdev_nvme_attach_controller" 00:22:03.037 } 00:22:03.037 EOF 00:22:03.037 )") 00:22:03.037 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:22:03.299 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:22:03.299 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:22:03.299 13:49:55 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:03.299 "params": { 00:22:03.299 "name": "Nvme1", 00:22:03.299 "trtype": "rdma", 00:22:03.299 "traddr": "192.168.100.8", 00:22:03.299 "adrfam": "ipv4", 00:22:03.299 "trsvcid": "4420", 00:22:03.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.299 "hdgst": false, 00:22:03.299 "ddgst": false 00:22:03.299 }, 00:22:03.299 "method": "bdev_nvme_attach_controller" 00:22:03.299 }' 00:22:03.299 [2024-06-11 13:49:55.983857] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
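The target bring-up that rpc_cmd drives above, together with the connect parameters printed by gen_nvmf_target_json, can be reproduced by hand with scripts/rpc.py. A minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and using the same arguments the trace shows:

    # same RPCs the bdevio test issues above, in the same order
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420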
00:22:03.299 [2024-06-11 13:49:55.983927] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146361 ] 00:22:03.299 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.299 [2024-06-11 13:49:56.051997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.299 [2024-06-11 13:49:56.129057] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.299 [2024-06-11 13:49:56.129122] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.299 [2024-06-11 13:49:56.129124] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.560 I/O targets: 00:22:03.560 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:03.560 00:22:03.560 00:22:03.560 CUnit - A unit testing framework for C - Version 2.1-3 00:22:03.560 http://cunit.sourceforge.net/ 00:22:03.560 00:22:03.560 00:22:03.560 Suite: bdevio tests on: Nvme1n1 00:22:03.560 Test: blockdev write read block ...passed 00:22:03.560 Test: blockdev write zeroes read block ...passed 00:22:03.560 Test: blockdev write zeroes read no split ...passed 00:22:03.560 Test: blockdev write zeroes read split ...passed 00:22:03.560 Test: blockdev write zeroes read split partial ...passed 00:22:03.560 Test: blockdev reset ...[2024-06-11 13:49:56.338410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:03.560 [2024-06-11 13:49:56.367066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:03.560 [2024-06-11 13:49:56.408869] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:03.560 passed 00:22:03.560 Test: blockdev write read 8 blocks ...passed 00:22:03.560 Test: blockdev write read size > 128k ...passed 00:22:03.560 Test: blockdev write read invalid size ...passed 00:22:03.560 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:03.560 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:03.560 Test: blockdev write read max offset ...passed 00:22:03.560 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:03.560 Test: blockdev writev readv 8 blocks ...passed 00:22:03.560 Test: blockdev writev readv 30 x 1block ...passed 00:22:03.560 Test: blockdev writev readv block ...passed 00:22:03.560 Test: blockdev writev readv size > 128k ...passed 00:22:03.560 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:03.560 Test: blockdev comparev and writev ...[2024-06-11 13:49:56.414532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.414557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.414565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.414570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.414766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.414773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.414780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.414785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.414932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.414941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.414947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.414955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.415106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.415114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.415121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:03.560 [2024-06-11 13:49:56.415126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:03.560 passed 00:22:03.560 Test: blockdev nvme passthru rw ...passed 00:22:03.560 Test: blockdev nvme passthru vendor specific ...[2024-06-11 13:49:56.415984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:03.560 [2024-06-11 13:49:56.415992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.416039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:03.560 [2024-06-11 13:49:56.416044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.416083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:03.560 [2024-06-11 13:49:56.416089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:03.560 [2024-06-11 13:49:56.416134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:03.560 [2024-06-11 13:49:56.416140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:03.560 passed 00:22:03.560 Test: blockdev nvme admin passthru ...passed 00:22:03.560 Test: blockdev copy ...passed 00:22:03.560 00:22:03.560 Run Summary: Type Total Ran Passed Failed Inactive 00:22:03.560 suites 1 1 n/a 0 0 00:22:03.560 tests 23 23 23 0 0 00:22:03.560 asserts 152 152 152 0 n/a 00:22:03.560 00:22:03.560 Elapsed time = 0.231 seconds 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:03.821 rmmod nvme_rdma 00:22:03.821 rmmod nvme_fabrics 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2146137 ']' 00:22:03.821 13:49:56 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2146137 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 2146137 ']' 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 2146137 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2146137 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2146137' 00:22:03.821 killing process with pid 2146137 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 2146137 00:22:03.821 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 2146137 00:22:04.082 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.082 13:49:56 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:04.082 00:22:04.082 real 0m9.484s 00:22:04.082 user 0m10.973s 00:22:04.082 sys 0m5.889s 00:22:04.082 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:04.082 13:49:56 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 ************************************ 00:22:04.082 END TEST nvmf_bdevio 00:22:04.082 ************************************ 00:22:04.344 13:49:57 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:22:04.344 13:49:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:04.344 13:49:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:04.344 13:49:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:04.344 ************************************ 00:22:04.344 START TEST nvmf_auth_target 00:22:04.344 ************************************ 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:22:04.344 * Looking for test storage... 
00:22:04.344 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.344 13:49:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.485 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:12.486 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:12.486 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:12.486 Found net devices under 0000:98:00.0: mlx_0_0 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:12.486 Found net devices under 0000:98:00.1: mlx_0_1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:12.486 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:12.486 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:12.486 altname enp152s0f0np0 00:22:12.486 altname ens817f0np0 00:22:12.486 inet 192.168.100.8/24 scope global mlx_0_0 00:22:12.486 valid_lft forever preferred_lft forever 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:12.486 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:12.486 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:12.486 altname enp152s0f1np1 00:22:12.486 altname ens817f1np1 00:22:12.486 inet 192.168.100.9/24 scope global mlx_0_1 00:22:12.486 valid_lft forever preferred_lft forever 
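Note on the address lookup traced above: the get_ip_address step reduces to a three-stage pipeline over "ip -o -4 addr show". A minimal standalone sketch of that step, using the mlx_0_0/mlx_0_1 names seen on this node; the helper name get_ipv4 is illustrative and not part of nvmf/common.sh:

  # Print the first IPv4 address assigned to a netdev, without the /CIDR suffix.
  get_ipv4() {
      local ifc=$1
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  }
  get_ipv4 mlx_0_0   # 192.168.100.8 on this node
  get_ipv4 mlx_0_1   # 192.168.100.9 on this node
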
00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:12.486 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:12.487 
13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:12.487 192.168.100.9' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:12.487 192.168.100.9' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:12.487 192.168.100.9' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2150219 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2150219 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2150219 ']' 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
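Note on the target-IP selection traced above: NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are carved out of the newline-separated RDMA_IP_LIST with head and tail. A minimal reproduction of that selection, using the two addresses observed in this run (variable names follow the trace):

  RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9\n')
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
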
00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:12.487 13:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2150562 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1baf1d72f26c199218b6ea82e3ff03f4ac77ff38e2d47b44 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FxQ 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1baf1d72f26c199218b6ea82e3ff03f4ac77ff38e2d47b44 0 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1baf1d72f26c199218b6ea82e3ff03f4ac77ff38e2d47b44 0 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1baf1d72f26c199218b6ea82e3ff03f4ac77ff38e2d47b44 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FxQ 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FxQ 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.FxQ 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6ec7b503490ae03553b712d6480d39a5016e643c7653cef7ae5332b34810e82b 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DAF 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6ec7b503490ae03553b712d6480d39a5016e643c7653cef7ae5332b34810e82b 3 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6ec7b503490ae03553b712d6480d39a5016e643c7653cef7ae5332b34810e82b 3 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6ec7b503490ae03553b712d6480d39a5016e643c7653cef7ae5332b34810e82b 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DAF 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DAF 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.DAF 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6434e6afe123e97f35631523c038b7ce 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pFV 00:22:12.487 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6434e6afe123e97f35631523c038b7ce 1 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6434e6afe123e97f35631523c038b7ce 1 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6434e6afe123e97f35631523c038b7ce 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pFV 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pFV 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.pFV 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=98996d370f328b026ea62533af7715dcc5cab73ccf12acf9 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4Mc 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 98996d370f328b026ea62533af7715dcc5cab73ccf12acf9 2 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 98996d370f328b026ea62533af7715dcc5cab73ccf12acf9 2 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=98996d370f328b026ea62533af7715dcc5cab73ccf12acf9 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:12.488 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4Mc 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4Mc 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.4Mc 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=b6fb1f15220840c131b400a1322fad1d156268373296c20e 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0Ux 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b6fb1f15220840c131b400a1322fad1d156268373296c20e 2 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b6fb1f15220840c131b400a1322fad1d156268373296c20e 2 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b6fb1f15220840c131b400a1322fad1d156268373296c20e 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0Ux 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0Ux 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.0Ux 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=afa254a114308e2bdbf1e8443f0ff6b8 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4bz 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key afa254a114308e2bdbf1e8443f0ff6b8 1 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 afa254a114308e2bdbf1e8443f0ff6b8 1 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=afa254a114308e2bdbf1e8443f0ff6b8 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4bz 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4bz 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4bz 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:22:12.748 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2e678cfa8d50a443239b63215931d44a17285d22efceeacb33217ebca65f818a 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.QCr 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2e678cfa8d50a443239b63215931d44a17285d22efceeacb33217ebca65f818a 3 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2e678cfa8d50a443239b63215931d44a17285d22efceeacb33217ebca65f818a 3 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2e678cfa8d50a443239b63215931d44a17285d22efceeacb33217ebca65f818a 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.QCr 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.QCr 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.QCr 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2150219 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2150219 ']' 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
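Note on the key files written above: each holds a secret of the form DHHC-1:<digest-id>:<base64>:, where the two-digit field encodes the hash (00 = null, 01 = sha256, 02 = sha384, 03 = sha512, matching the digests table in the trace) and the base64 payload is the hex string printed by xxd followed by a 4-byte checksum trailer (a CRC32, to the best of my reading of the generator; the sketch below does not depend on that). A small sketch that recovers the key0 hex from the secret later passed to nvme connect in this run; it assumes GNU coreutils base64 and head, and simply strips the trailer rather than verifying it:

  secret='DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==:'
  b64=$(echo "$secret" | cut -d: -f3)            # keep only the base64 field
  echo "$b64" | base64 -d | head -c -4; echo     # drop the 4-byte checksum trailer
  # prints 1baf1d72f26c199218b6ea82e3ff03f4ac77ff38e2d47b44, i.e. key 0 above
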
00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:12.749 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.008 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:13.008 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:22:13.008 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2150562 /var/tmp/host.sock 00:22:13.009 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2150562 ']' 00:22:13.009 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:22:13.009 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:13.009 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:13.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:13.009 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:13.009 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.269 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:13.269 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:22:13.269 13:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:22:13.269 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.269 13:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FxQ 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.FxQ 00:22:13.269 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.FxQ 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.DAF ]] 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DAF 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DAF 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DAF 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.pFV 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.pFV 00:22:13.529 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.pFV 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.4Mc ]] 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Mc 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Mc 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Mc 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0Ux 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0Ux 00:22:13.791 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0Ux 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.4bz ]] 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bz 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bz 00:22:14.051 13:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bz 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QCr 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.QCr 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.QCr 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:14.311 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.572 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.860 00:22:14.860 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.860 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.861 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.125 { 00:22:15.125 "cntlid": 1, 00:22:15.125 "qid": 0, 00:22:15.125 "state": "enabled", 00:22:15.125 "listen_address": { 00:22:15.125 "trtype": "RDMA", 00:22:15.125 "adrfam": "IPv4", 00:22:15.125 "traddr": "192.168.100.8", 00:22:15.125 "trsvcid": "4420" 00:22:15.125 }, 00:22:15.125 "peer_address": { 00:22:15.125 "trtype": "RDMA", 00:22:15.125 "adrfam": "IPv4", 00:22:15.125 "traddr": "192.168.100.8", 00:22:15.125 "trsvcid": "54301" 00:22:15.125 }, 00:22:15.125 "auth": { 00:22:15.125 "state": "completed", 00:22:15.125 "digest": "sha256", 00:22:15.125 "dhgroup": "null" 00:22:15.125 } 00:22:15.125 } 00:22:15.125 ]' 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.125 13:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.385 13:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:22:16.326 13:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.326 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.326 13:50:09 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.326 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.326 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.326 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.326 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:16.326 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.587 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.587 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.848 13:50:09 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.848 { 00:22:16.848 "cntlid": 3, 00:22:16.848 "qid": 0, 00:22:16.848 "state": "enabled", 00:22:16.848 "listen_address": { 00:22:16.848 "trtype": "RDMA", 00:22:16.848 "adrfam": "IPv4", 00:22:16.848 "traddr": "192.168.100.8", 00:22:16.848 "trsvcid": "4420" 00:22:16.848 }, 00:22:16.848 "peer_address": { 00:22:16.848 "trtype": "RDMA", 00:22:16.848 "adrfam": "IPv4", 00:22:16.848 "traddr": "192.168.100.8", 00:22:16.848 "trsvcid": "36333" 00:22:16.848 }, 00:22:16.848 "auth": { 00:22:16.848 "state": "completed", 00:22:16.848 "digest": "sha256", 00:22:16.848 "dhgroup": "null" 00:22:16.848 } 00:22:16.848 } 00:22:16.848 ]' 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:16.848 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.109 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.109 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.109 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.109 13:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.051 13:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.312 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.573 00:22:18.573 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.573 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.573 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.834 { 00:22:18.834 "cntlid": 5, 00:22:18.834 "qid": 0, 00:22:18.834 "state": "enabled", 00:22:18.834 "listen_address": { 00:22:18.834 "trtype": "RDMA", 00:22:18.834 "adrfam": "IPv4", 00:22:18.834 "traddr": "192.168.100.8", 00:22:18.834 "trsvcid": "4420" 00:22:18.834 }, 00:22:18.834 "peer_address": { 00:22:18.834 "trtype": "RDMA", 00:22:18.834 "adrfam": "IPv4", 00:22:18.834 "traddr": "192.168.100.8", 00:22:18.834 "trsvcid": "41567" 00:22:18.834 }, 00:22:18.834 "auth": { 00:22:18.834 "state": "completed", 00:22:18.834 "digest": "sha256", 00:22:18.834 "dhgroup": "null" 00:22:18.834 } 00:22:18.834 } 00:22:18.834 ]' 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.834 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.095 13:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:22:20.038 13:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.299 00:22:20.299 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.299 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.299 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.560 { 00:22:20.560 "cntlid": 7, 00:22:20.560 "qid": 0, 00:22:20.560 "state": "enabled", 00:22:20.560 "listen_address": { 00:22:20.560 "trtype": "RDMA", 00:22:20.560 "adrfam": "IPv4", 00:22:20.560 "traddr": "192.168.100.8", 00:22:20.560 "trsvcid": "4420" 00:22:20.560 }, 00:22:20.560 "peer_address": { 00:22:20.560 "trtype": "RDMA", 00:22:20.560 "adrfam": "IPv4", 00:22:20.560 "traddr": "192.168.100.8", 00:22:20.560 "trsvcid": "38644" 00:22:20.560 }, 00:22:20.560 "auth": { 00:22:20.560 "state": "completed", 00:22:20.560 "digest": "sha256", 00:22:20.560 "dhgroup": "null" 00:22:20.560 } 00:22:20.560 } 00:22:20.560 ]' 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:20.560 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.822 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.822 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.822 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.822 13:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.764 13:50:14 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:21.764 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.025 13:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.286 00:22:22.286 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.286 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.286 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.549 { 00:22:22.549 "cntlid": 9, 00:22:22.549 "qid": 0, 00:22:22.549 "state": "enabled", 00:22:22.549 "listen_address": { 00:22:22.549 "trtype": "RDMA", 00:22:22.549 "adrfam": "IPv4", 00:22:22.549 "traddr": "192.168.100.8", 00:22:22.549 "trsvcid": "4420" 00:22:22.549 }, 00:22:22.549 "peer_address": { 00:22:22.549 "trtype": "RDMA", 00:22:22.549 "adrfam": "IPv4", 00:22:22.549 "traddr": "192.168.100.8", 00:22:22.549 "trsvcid": "50634" 00:22:22.549 }, 00:22:22.549 "auth": { 00:22:22.549 "state": "completed", 00:22:22.549 "digest": "sha256", 00:22:22.549 "dhgroup": "ffdhe2048" 00:22:22.549 } 00:22:22.549 } 00:22:22.549 ]' 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.549 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.810 13:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:22:23.752 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.752 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:23.752 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.753 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.014 00:22:24.014 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.014 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.014 13:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.292 { 00:22:24.292 "cntlid": 11, 00:22:24.292 "qid": 0, 00:22:24.292 "state": "enabled", 00:22:24.292 "listen_address": { 00:22:24.292 "trtype": "RDMA", 00:22:24.292 "adrfam": "IPv4", 00:22:24.292 "traddr": "192.168.100.8", 00:22:24.292 "trsvcid": "4420" 00:22:24.292 }, 00:22:24.292 "peer_address": { 00:22:24.292 "trtype": "RDMA", 00:22:24.292 "adrfam": "IPv4", 00:22:24.292 "traddr": "192.168.100.8", 00:22:24.292 "trsvcid": "42826" 00:22:24.292 }, 00:22:24.292 "auth": { 00:22:24.292 "state": "completed", 00:22:24.292 
"digest": "sha256", 00:22:24.292 "dhgroup": "ffdhe2048" 00:22:24.292 } 00:22:24.292 } 00:22:24.292 ]' 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.292 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.553 13:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:25.495 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.757 
13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.757 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.018 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.018 { 00:22:26.018 "cntlid": 13, 00:22:26.018 "qid": 0, 00:22:26.018 "state": "enabled", 00:22:26.018 "listen_address": { 00:22:26.018 "trtype": "RDMA", 00:22:26.018 "adrfam": "IPv4", 00:22:26.018 "traddr": "192.168.100.8", 00:22:26.018 "trsvcid": "4420" 00:22:26.018 }, 00:22:26.018 "peer_address": { 00:22:26.018 "trtype": "RDMA", 00:22:26.018 "adrfam": "IPv4", 00:22:26.018 "traddr": "192.168.100.8", 00:22:26.018 "trsvcid": "60172" 00:22:26.018 }, 00:22:26.018 "auth": { 00:22:26.018 "state": "completed", 00:22:26.018 "digest": "sha256", 00:22:26.018 "dhgroup": "ffdhe2048" 00:22:26.018 } 00:22:26.018 } 00:22:26.018 ]' 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:26.018 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.279 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:26.279 13:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.279 13:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.279 13:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.279 13:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:22:26.280 13:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:22:27.222 13:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:27.222 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:27.482 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:22:27.482 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.482 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:27.482 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:27.482 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.483 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.744 00:22:27.744 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.744 13:50:20 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.744 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.004 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.005 { 00:22:28.005 "cntlid": 15, 00:22:28.005 "qid": 0, 00:22:28.005 "state": "enabled", 00:22:28.005 "listen_address": { 00:22:28.005 "trtype": "RDMA", 00:22:28.005 "adrfam": "IPv4", 00:22:28.005 "traddr": "192.168.100.8", 00:22:28.005 "trsvcid": "4420" 00:22:28.005 }, 00:22:28.005 "peer_address": { 00:22:28.005 "trtype": "RDMA", 00:22:28.005 "adrfam": "IPv4", 00:22:28.005 "traddr": "192.168.100.8", 00:22:28.005 "trsvcid": "39613" 00:22:28.005 }, 00:22:28.005 "auth": { 00:22:28.005 "state": "completed", 00:22:28.005 "digest": "sha256", 00:22:28.005 "dhgroup": "ffdhe2048" 00:22:28.005 } 00:22:28.005 } 00:22:28.005 ]' 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.005 13:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.266 13:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.208 13:50:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:29.208 13:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.468 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.468 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.729 { 00:22:29.729 "cntlid": 17, 00:22:29.729 "qid": 0, 00:22:29.729 "state": "enabled", 00:22:29.729 
"listen_address": { 00:22:29.729 "trtype": "RDMA", 00:22:29.729 "adrfam": "IPv4", 00:22:29.729 "traddr": "192.168.100.8", 00:22:29.729 "trsvcid": "4420" 00:22:29.729 }, 00:22:29.729 "peer_address": { 00:22:29.729 "trtype": "RDMA", 00:22:29.729 "adrfam": "IPv4", 00:22:29.729 "traddr": "192.168.100.8", 00:22:29.729 "trsvcid": "41870" 00:22:29.729 }, 00:22:29.729 "auth": { 00:22:29.729 "state": "completed", 00:22:29.729 "digest": "sha256", 00:22:29.729 "dhgroup": "ffdhe3072" 00:22:29.729 } 00:22:29.729 } 00:22:29.729 ]' 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:29.729 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.989 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.989 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.989 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.989 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.989 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.989 13:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:30.931 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.192 13:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:31.454 00:22:31.454 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.454 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.454 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.714 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.715 { 00:22:31.715 "cntlid": 19, 00:22:31.715 "qid": 0, 00:22:31.715 "state": "enabled", 00:22:31.715 "listen_address": { 00:22:31.715 "trtype": "RDMA", 00:22:31.715 "adrfam": "IPv4", 00:22:31.715 "traddr": "192.168.100.8", 00:22:31.715 "trsvcid": "4420" 00:22:31.715 }, 00:22:31.715 "peer_address": { 00:22:31.715 "trtype": "RDMA", 00:22:31.715 "adrfam": "IPv4", 00:22:31.715 "traddr": "192.168.100.8", 00:22:31.715 "trsvcid": "38532" 00:22:31.715 }, 00:22:31.715 "auth": { 00:22:31.715 "state": "completed", 00:22:31.715 "digest": "sha256", 00:22:31.715 "dhgroup": "ffdhe3072" 00:22:31.715 } 00:22:31.715 } 00:22:31.715 ]' 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.715 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.975 13:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.916 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.917 13:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.917 13:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.917 13:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.917 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:32.917 13:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.177 00:22:33.177 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.177 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.177 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.439 { 00:22:33.439 "cntlid": 21, 00:22:33.439 "qid": 0, 00:22:33.439 "state": "enabled", 00:22:33.439 "listen_address": { 00:22:33.439 "trtype": "RDMA", 00:22:33.439 "adrfam": "IPv4", 00:22:33.439 "traddr": "192.168.100.8", 00:22:33.439 "trsvcid": "4420" 00:22:33.439 }, 00:22:33.439 "peer_address": { 00:22:33.439 "trtype": "RDMA", 00:22:33.439 "adrfam": "IPv4", 00:22:33.439 "traddr": "192.168.100.8", 00:22:33.439 "trsvcid": "37216" 00:22:33.439 }, 00:22:33.439 "auth": { 00:22:33.439 "state": "completed", 00:22:33.439 "digest": "sha256", 00:22:33.439 "dhgroup": "ffdhe3072" 00:22:33.439 } 00:22:33.439 } 00:22:33.439 ]' 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.439 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.699 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.699 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.699 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.699 13:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:34.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:34.641 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.902 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:35.162 00:22:35.162 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.162 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.162 13:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.162 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.162 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.162 13:50:28 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.162 13:50:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.422 { 00:22:35.422 "cntlid": 23, 00:22:35.422 "qid": 0, 00:22:35.422 "state": "enabled", 00:22:35.422 "listen_address": { 00:22:35.422 "trtype": "RDMA", 00:22:35.422 "adrfam": "IPv4", 00:22:35.422 "traddr": "192.168.100.8", 00:22:35.422 "trsvcid": "4420" 00:22:35.422 }, 00:22:35.422 "peer_address": { 00:22:35.422 "trtype": "RDMA", 00:22:35.422 "adrfam": "IPv4", 00:22:35.422 "traddr": "192.168.100.8", 00:22:35.422 "trsvcid": "55197" 00:22:35.422 }, 00:22:35.422 "auth": { 00:22:35.422 "state": "completed", 00:22:35.422 "digest": "sha256", 00:22:35.422 "dhgroup": "ffdhe3072" 00:22:35.422 } 00:22:35.422 } 00:22:35.422 ]' 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.422 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.423 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.683 13:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.625 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.626 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.886 00:22:36.886 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:36.886 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:36.886 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.146 { 00:22:37.146 "cntlid": 25, 00:22:37.146 "qid": 0, 00:22:37.146 "state": "enabled", 00:22:37.146 "listen_address": { 00:22:37.146 "trtype": "RDMA", 00:22:37.146 "adrfam": "IPv4", 00:22:37.146 "traddr": "192.168.100.8", 00:22:37.146 "trsvcid": "4420" 00:22:37.146 }, 00:22:37.146 "peer_address": { 00:22:37.146 "trtype": "RDMA", 00:22:37.146 "adrfam": "IPv4", 00:22:37.146 "traddr": "192.168.100.8", 00:22:37.146 "trsvcid": "47690" 00:22:37.146 }, 00:22:37.146 "auth": { 00:22:37.146 "state": "completed", 00:22:37.146 "digest": "sha256", 00:22:37.146 "dhgroup": "ffdhe4096" 00:22:37.146 } 00:22:37.146 } 00:22:37.146 ]' 00:22:37.146 13:50:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.146 13:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.146 13:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.146 13:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.407 13:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.407 13:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.407 13:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.407 13:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.347 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.607 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.867 00:22:38.867 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.867 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.867 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.129 { 00:22:39.129 "cntlid": 27, 00:22:39.129 "qid": 0, 00:22:39.129 "state": "enabled", 00:22:39.129 "listen_address": { 00:22:39.129 "trtype": "RDMA", 00:22:39.129 "adrfam": "IPv4", 00:22:39.129 "traddr": "192.168.100.8", 00:22:39.129 "trsvcid": "4420" 00:22:39.129 }, 00:22:39.129 "peer_address": { 00:22:39.129 "trtype": "RDMA", 00:22:39.129 "adrfam": "IPv4", 00:22:39.129 "traddr": "192.168.100.8", 00:22:39.129 "trsvcid": "49661" 00:22:39.129 }, 00:22:39.129 "auth": { 00:22:39.129 "state": "completed", 00:22:39.129 "digest": "sha256", 00:22:39.129 "dhgroup": "ffdhe4096" 00:22:39.129 } 00:22:39.129 } 00:22:39.129 ]' 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.129 13:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.389 13:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:22:40.367 13:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.367 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.672 00:22:40.672 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.672 13:50:33 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.672 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.940 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.940 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.941 { 00:22:40.941 "cntlid": 29, 00:22:40.941 "qid": 0, 00:22:40.941 "state": "enabled", 00:22:40.941 "listen_address": { 00:22:40.941 "trtype": "RDMA", 00:22:40.941 "adrfam": "IPv4", 00:22:40.941 "traddr": "192.168.100.8", 00:22:40.941 "trsvcid": "4420" 00:22:40.941 }, 00:22:40.941 "peer_address": { 00:22:40.941 "trtype": "RDMA", 00:22:40.941 "adrfam": "IPv4", 00:22:40.941 "traddr": "192.168.100.8", 00:22:40.941 "trsvcid": "60950" 00:22:40.941 }, 00:22:40.941 "auth": { 00:22:40.941 "state": "completed", 00:22:40.941 "digest": "sha256", 00:22:40.941 "dhgroup": "ffdhe4096" 00:22:40.941 } 00:22:40.941 } 00:22:40.941 ]' 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.941 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.201 13:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:42.143 13:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.403 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.664 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.664 { 00:22:42.664 "cntlid": 31, 00:22:42.664 "qid": 0, 00:22:42.664 "state": "enabled", 00:22:42.664 "listen_address": { 00:22:42.664 "trtype": "RDMA", 00:22:42.664 "adrfam": "IPv4", 00:22:42.664 "traddr": 
"192.168.100.8", 00:22:42.664 "trsvcid": "4420" 00:22:42.664 }, 00:22:42.664 "peer_address": { 00:22:42.664 "trtype": "RDMA", 00:22:42.664 "adrfam": "IPv4", 00:22:42.664 "traddr": "192.168.100.8", 00:22:42.664 "trsvcid": "44837" 00:22:42.664 }, 00:22:42.664 "auth": { 00:22:42.664 "state": "completed", 00:22:42.664 "digest": "sha256", 00:22:42.664 "dhgroup": "ffdhe4096" 00:22:42.664 } 00:22:42.664 } 00:22:42.664 ]' 00:22:42.664 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.925 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.186 13:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:22:43.756 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:44.015 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:44.275 13:50:36 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.275 13:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.536 00:22:44.536 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:44.536 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:44.536 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:44.796 { 00:22:44.796 "cntlid": 33, 00:22:44.796 "qid": 0, 00:22:44.796 "state": "enabled", 00:22:44.796 "listen_address": { 00:22:44.796 "trtype": "RDMA", 00:22:44.796 "adrfam": "IPv4", 00:22:44.796 "traddr": "192.168.100.8", 00:22:44.796 "trsvcid": "4420" 00:22:44.796 }, 00:22:44.796 "peer_address": { 00:22:44.796 "trtype": "RDMA", 00:22:44.796 "adrfam": "IPv4", 00:22:44.796 "traddr": "192.168.100.8", 00:22:44.796 "trsvcid": "57598" 00:22:44.796 }, 00:22:44.796 "auth": { 00:22:44.796 "state": "completed", 00:22:44.796 "digest": "sha256", 00:22:44.796 "dhgroup": "ffdhe6144" 00:22:44.796 } 00:22:44.796 } 00:22:44.796 ]' 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:44.796 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:44.797 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:44.797 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:44.797 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.797 13:50:37 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.797 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.797 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.057 13:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:45.997 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.257 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.257 13:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.257 13:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.257 13:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.257 13:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.257 13:50:38 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.516 00:22:46.516 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.516 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.516 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.775 { 00:22:46.775 "cntlid": 35, 00:22:46.775 "qid": 0, 00:22:46.775 "state": "enabled", 00:22:46.775 "listen_address": { 00:22:46.775 "trtype": "RDMA", 00:22:46.775 "adrfam": "IPv4", 00:22:46.775 "traddr": "192.168.100.8", 00:22:46.775 "trsvcid": "4420" 00:22:46.775 }, 00:22:46.775 "peer_address": { 00:22:46.775 "trtype": "RDMA", 00:22:46.775 "adrfam": "IPv4", 00:22:46.775 "traddr": "192.168.100.8", 00:22:46.775 "trsvcid": "49991" 00:22:46.775 }, 00:22:46.775 "auth": { 00:22:46.775 "state": "completed", 00:22:46.775 "digest": "sha256", 00:22:46.775 "dhgroup": "ffdhe6144" 00:22:46.775 } 00:22:46.775 } 00:22:46.775 ]' 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.775 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.035 13:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.977 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:47.977 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.238 13:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.497 00:22:48.497 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.497 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.497 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.757 13:50:41 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.757 { 00:22:48.757 "cntlid": 37, 00:22:48.757 "qid": 0, 00:22:48.757 "state": "enabled", 00:22:48.757 "listen_address": { 00:22:48.757 "trtype": "RDMA", 00:22:48.757 "adrfam": "IPv4", 00:22:48.757 "traddr": "192.168.100.8", 00:22:48.757 "trsvcid": "4420" 00:22:48.757 }, 00:22:48.757 "peer_address": { 00:22:48.757 "trtype": "RDMA", 00:22:48.757 "adrfam": "IPv4", 00:22:48.757 "traddr": "192.168.100.8", 00:22:48.757 "trsvcid": "52630" 00:22:48.757 }, 00:22:48.757 "auth": { 00:22:48.757 "state": "completed", 00:22:48.757 "digest": "sha256", 00:22:48.757 "dhgroup": "ffdhe6144" 00:22:48.757 } 00:22:48.757 } 00:22:48.757 ]' 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.757 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.017 13:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.955 13:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.215 13:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.215 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:50.215 13:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:50.475 00:22:50.475 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.475 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.475 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.735 { 00:22:50.735 "cntlid": 39, 00:22:50.735 "qid": 0, 00:22:50.735 "state": "enabled", 00:22:50.735 "listen_address": { 00:22:50.735 "trtype": "RDMA", 00:22:50.735 "adrfam": "IPv4", 00:22:50.735 "traddr": "192.168.100.8", 00:22:50.735 "trsvcid": "4420" 00:22:50.735 }, 00:22:50.735 "peer_address": { 00:22:50.735 "trtype": "RDMA", 00:22:50.735 "adrfam": "IPv4", 00:22:50.735 "traddr": "192.168.100.8", 00:22:50.735 "trsvcid": "48825" 00:22:50.735 }, 00:22:50.735 "auth": { 00:22:50.735 "state": "completed", 00:22:50.735 "digest": "sha256", 00:22:50.735 "dhgroup": "ffdhe6144" 00:22:50.735 } 00:22:50.735 } 00:22:50.735 ]' 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.735 13:50:43 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.735 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.995 13:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:51.935 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.195 13:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.765 00:22:52.765 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.765 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.766 { 00:22:52.766 "cntlid": 41, 00:22:52.766 "qid": 0, 00:22:52.766 "state": "enabled", 00:22:52.766 "listen_address": { 00:22:52.766 "trtype": "RDMA", 00:22:52.766 "adrfam": "IPv4", 00:22:52.766 "traddr": "192.168.100.8", 00:22:52.766 "trsvcid": "4420" 00:22:52.766 }, 00:22:52.766 "peer_address": { 00:22:52.766 "trtype": "RDMA", 00:22:52.766 "adrfam": "IPv4", 00:22:52.766 "traddr": "192.168.100.8", 00:22:52.766 "trsvcid": "55793" 00:22:52.766 }, 00:22:52.766 "auth": { 00:22:52.766 "state": "completed", 00:22:52.766 "digest": "sha256", 00:22:52.766 "dhgroup": "ffdhe8192" 00:22:52.766 } 00:22:52.766 } 00:22:52.766 ]' 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.766 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.026 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.026 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.026 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.026 13:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:53.967 13:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.228 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.800 00:22:54.800 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.800 13:50:47 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.800 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.060 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.060 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.060 13:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.060 13:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.060 13:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.060 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.060 { 00:22:55.060 "cntlid": 43, 00:22:55.060 "qid": 0, 00:22:55.060 "state": "enabled", 00:22:55.060 "listen_address": { 00:22:55.060 "trtype": "RDMA", 00:22:55.060 "adrfam": "IPv4", 00:22:55.061 "traddr": "192.168.100.8", 00:22:55.061 "trsvcid": "4420" 00:22:55.061 }, 00:22:55.061 "peer_address": { 00:22:55.061 "trtype": "RDMA", 00:22:55.061 "adrfam": "IPv4", 00:22:55.061 "traddr": "192.168.100.8", 00:22:55.061 "trsvcid": "36009" 00:22:55.061 }, 00:22:55.061 "auth": { 00:22:55.061 "state": "completed", 00:22:55.061 "digest": "sha256", 00:22:55.061 "dhgroup": "ffdhe8192" 00:22:55.061 } 00:22:55.061 } 00:22:55.061 ]' 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.061 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.320 13:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:22:55.891 13:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:56.152 13:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.413 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.985 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.985 { 00:22:56.985 "cntlid": 45, 00:22:56.985 "qid": 0, 00:22:56.985 "state": "enabled", 00:22:56.985 "listen_address": { 00:22:56.985 "trtype": "RDMA", 00:22:56.985 "adrfam": "IPv4", 
00:22:56.985 "traddr": "192.168.100.8", 00:22:56.985 "trsvcid": "4420" 00:22:56.985 }, 00:22:56.985 "peer_address": { 00:22:56.985 "trtype": "RDMA", 00:22:56.985 "adrfam": "IPv4", 00:22:56.985 "traddr": "192.168.100.8", 00:22:56.985 "trsvcid": "60658" 00:22:56.985 }, 00:22:56.985 "auth": { 00:22:56.985 "state": "completed", 00:22:56.985 "digest": "sha256", 00:22:56.985 "dhgroup": "ffdhe8192" 00:22:56.985 } 00:22:56.985 } 00:22:56.985 ]' 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.985 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.245 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:57.245 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.245 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.245 13:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.245 13:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:22:58.186 13:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.186 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:58.447 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:59.018 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.018 { 00:22:59.018 "cntlid": 47, 00:22:59.018 "qid": 0, 00:22:59.018 "state": "enabled", 00:22:59.018 "listen_address": { 00:22:59.018 "trtype": "RDMA", 00:22:59.018 "adrfam": "IPv4", 00:22:59.018 "traddr": "192.168.100.8", 00:22:59.018 "trsvcid": "4420" 00:22:59.018 }, 00:22:59.018 "peer_address": { 00:22:59.018 "trtype": "RDMA", 00:22:59.018 "adrfam": "IPv4", 00:22:59.018 "traddr": "192.168.100.8", 00:22:59.018 "trsvcid": "56239" 00:22:59.018 }, 00:22:59.018 "auth": { 00:22:59.018 "state": "completed", 00:22:59.018 "digest": "sha256", 00:22:59.018 "dhgroup": "ffdhe8192" 00:22:59.018 } 00:22:59.018 } 00:22:59.018 ]' 00:22:59.018 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.278 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:59.278 13:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.278 13:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.278 13:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.278 13:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.278 13:50:52 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.278 13:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.539 13:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.480 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.480 13:50:53 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.740 00:23:00.740 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:00.740 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.740 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.001 { 00:23:01.001 "cntlid": 49, 00:23:01.001 "qid": 0, 00:23:01.001 "state": "enabled", 00:23:01.001 "listen_address": { 00:23:01.001 "trtype": "RDMA", 00:23:01.001 "adrfam": "IPv4", 00:23:01.001 "traddr": "192.168.100.8", 00:23:01.001 "trsvcid": "4420" 00:23:01.001 }, 00:23:01.001 "peer_address": { 00:23:01.001 "trtype": "RDMA", 00:23:01.001 "adrfam": "IPv4", 00:23:01.001 "traddr": "192.168.100.8", 00:23:01.001 "trsvcid": "43139" 00:23:01.001 }, 00:23:01.001 "auth": { 00:23:01.001 "state": "completed", 00:23:01.001 "digest": "sha384", 00:23:01.001 "dhgroup": "null" 00:23:01.001 } 00:23:01.001 } 00:23:01.001 ]' 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:01.001 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.262 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.262 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.262 13:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.262 13:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.203 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:02.203 13:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.464 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.464 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.725 13:50:55 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.725 { 00:23:02.725 "cntlid": 51, 00:23:02.725 "qid": 0, 00:23:02.725 "state": "enabled", 00:23:02.725 "listen_address": { 00:23:02.725 "trtype": "RDMA", 00:23:02.725 "adrfam": "IPv4", 00:23:02.725 "traddr": "192.168.100.8", 00:23:02.725 "trsvcid": "4420" 00:23:02.725 }, 00:23:02.725 "peer_address": { 00:23:02.725 "trtype": "RDMA", 00:23:02.725 "adrfam": "IPv4", 00:23:02.725 "traddr": "192.168.100.8", 00:23:02.725 "trsvcid": "53585" 00:23:02.725 }, 00:23:02.725 "auth": { 00:23:02.725 "state": "completed", 00:23:02.725 "digest": "sha384", 00:23:02.725 "dhgroup": "null" 00:23:02.725 } 00:23:02.725 } 00:23:02.725 ]' 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:02.725 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.985 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.985 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.985 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.985 13:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:03.927 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:04.187 
13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.187 13:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.447 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.447 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.447 { 00:23:04.447 "cntlid": 53, 00:23:04.447 "qid": 0, 00:23:04.447 "state": "enabled", 00:23:04.447 "listen_address": { 00:23:04.447 "trtype": "RDMA", 00:23:04.447 "adrfam": "IPv4", 00:23:04.447 "traddr": "192.168.100.8", 00:23:04.447 "trsvcid": "4420" 00:23:04.447 }, 00:23:04.447 "peer_address": { 00:23:04.447 "trtype": "RDMA", 00:23:04.447 "adrfam": "IPv4", 00:23:04.447 "traddr": "192.168.100.8", 00:23:04.447 "trsvcid": "53723" 00:23:04.447 }, 00:23:04.447 "auth": { 00:23:04.447 "state": "completed", 00:23:04.447 "digest": "sha384", 00:23:04.447 "dhgroup": "null" 00:23:04.448 } 00:23:04.448 } 00:23:04.448 ]' 00:23:04.448 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.708 13:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:05.661 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:05.988 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:06.248 00:23:06.248 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.248 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.248 13:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.248 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.249 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.249 13:50:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.249 13:50:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.249 13:50:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.249 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.249 { 00:23:06.249 "cntlid": 55, 00:23:06.249 "qid": 0, 00:23:06.249 "state": "enabled", 00:23:06.249 "listen_address": { 00:23:06.249 "trtype": "RDMA", 00:23:06.249 "adrfam": "IPv4", 00:23:06.249 "traddr": "192.168.100.8", 00:23:06.249 "trsvcid": "4420" 00:23:06.249 }, 00:23:06.249 "peer_address": { 00:23:06.249 "trtype": "RDMA", 00:23:06.249 "adrfam": "IPv4", 00:23:06.249 "traddr": "192.168.100.8", 00:23:06.249 "trsvcid": "53859" 00:23:06.249 }, 00:23:06.249 "auth": { 00:23:06.249 "state": "completed", 00:23:06.249 "digest": "sha384", 00:23:06.249 "dhgroup": "null" 00:23:06.249 } 00:23:06.249 } 00:23:06.249 ]' 00:23:06.249 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.509 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.769 13:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:07.339 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:07.600 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.861 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.121 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.121 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:08.121 { 00:23:08.121 "cntlid": 57, 00:23:08.121 "qid": 0, 00:23:08.121 "state": "enabled", 00:23:08.121 "listen_address": { 00:23:08.121 "trtype": "RDMA", 00:23:08.121 "adrfam": "IPv4", 00:23:08.121 "traddr": "192.168.100.8", 00:23:08.121 "trsvcid": "4420" 00:23:08.121 }, 00:23:08.121 "peer_address": { 00:23:08.121 "trtype": "RDMA", 00:23:08.121 "adrfam": "IPv4", 00:23:08.121 "traddr": "192.168.100.8", 00:23:08.121 "trsvcid": "37754" 00:23:08.121 }, 00:23:08.121 "auth": { 00:23:08.121 "state": "completed", 00:23:08.121 "digest": "sha384", 00:23:08.122 "dhgroup": "ffdhe2048" 00:23:08.122 } 00:23:08.122 } 00:23:08.122 ]' 00:23:08.122 13:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:08.122 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:08.122 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:08.382 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:08.382 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:08.382 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.382 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.382 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.382 13:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:09.323 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.584 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.844 00:23:09.844 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.844 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.844 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.105 { 00:23:10.105 "cntlid": 59, 00:23:10.105 "qid": 0, 00:23:10.105 "state": "enabled", 00:23:10.105 "listen_address": { 00:23:10.105 "trtype": "RDMA", 00:23:10.105 "adrfam": "IPv4", 00:23:10.105 "traddr": "192.168.100.8", 00:23:10.105 "trsvcid": "4420" 
00:23:10.105 }, 00:23:10.105 "peer_address": { 00:23:10.105 "trtype": "RDMA", 00:23:10.105 "adrfam": "IPv4", 00:23:10.105 "traddr": "192.168.100.8", 00:23:10.105 "trsvcid": "38479" 00:23:10.105 }, 00:23:10.105 "auth": { 00:23:10.105 "state": "completed", 00:23:10.105 "digest": "sha384", 00:23:10.105 "dhgroup": "ffdhe2048" 00:23:10.105 } 00:23:10.105 } 00:23:10.105 ]' 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.105 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.106 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.106 13:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.366 13:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:11.309 13:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.309 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.569 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.569 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.569 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.570 00:23:11.570 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.570 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.570 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.830 { 00:23:11.830 "cntlid": 61, 00:23:11.830 "qid": 0, 00:23:11.830 "state": "enabled", 00:23:11.830 "listen_address": { 00:23:11.830 "trtype": "RDMA", 00:23:11.830 "adrfam": "IPv4", 00:23:11.830 "traddr": "192.168.100.8", 00:23:11.830 "trsvcid": "4420" 00:23:11.830 }, 00:23:11.830 "peer_address": { 00:23:11.830 "trtype": "RDMA", 00:23:11.830 "adrfam": "IPv4", 00:23:11.830 "traddr": "192.168.100.8", 00:23:11.830 "trsvcid": "39861" 00:23:11.830 }, 00:23:11.830 "auth": { 00:23:11.830 "state": "completed", 00:23:11.830 "digest": "sha384", 00:23:11.830 "dhgroup": "ffdhe2048" 00:23:11.830 } 00:23:11.830 } 00:23:11.830 ]' 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.830 13:51:04 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.830 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.090 13:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:13.031 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:13.292 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:23:13.292 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.292 13:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.292 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.553 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.553 { 00:23:13.553 "cntlid": 63, 00:23:13.553 "qid": 0, 00:23:13.553 "state": "enabled", 00:23:13.553 "listen_address": { 00:23:13.553 "trtype": "RDMA", 00:23:13.553 "adrfam": "IPv4", 00:23:13.553 "traddr": "192.168.100.8", 00:23:13.553 "trsvcid": "4420" 00:23:13.553 }, 00:23:13.553 "peer_address": { 00:23:13.553 "trtype": "RDMA", 00:23:13.553 "adrfam": "IPv4", 00:23:13.553 "traddr": "192.168.100.8", 00:23:13.553 "trsvcid": "36457" 00:23:13.553 }, 00:23:13.553 "auth": { 00:23:13.553 "state": "completed", 00:23:13.553 "digest": "sha384", 00:23:13.553 "dhgroup": "ffdhe2048" 00:23:13.553 } 00:23:13.553 } 00:23:13.553 ]' 00:23:13.553 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.813 13:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:14.754 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.754 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:14.754 13:51:07 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.754 13:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.014 13:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.274 00:23:15.274 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.274 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.274 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.536 { 00:23:15.536 "cntlid": 65, 00:23:15.536 "qid": 0, 00:23:15.536 "state": "enabled", 00:23:15.536 "listen_address": { 00:23:15.536 "trtype": "RDMA", 00:23:15.536 "adrfam": "IPv4", 00:23:15.536 "traddr": "192.168.100.8", 00:23:15.536 "trsvcid": "4420" 00:23:15.536 }, 00:23:15.536 "peer_address": { 00:23:15.536 "trtype": "RDMA", 00:23:15.536 "adrfam": "IPv4", 00:23:15.536 "traddr": "192.168.100.8", 00:23:15.536 "trsvcid": "46636" 00:23:15.536 }, 00:23:15.536 "auth": { 00:23:15.536 "state": "completed", 00:23:15.536 "digest": "sha384", 00:23:15.536 "dhgroup": "ffdhe3072" 00:23:15.536 } 00:23:15.536 } 00:23:15.536 ]' 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.536 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.797 13:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.739 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:17.000 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 
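The trace has just completed one connect_authenticate round (sha384 digest, ffdhe3072 DH group, key0) and is starting the next round for key1. Every round drives the same RPC sequence; the lines below are a condensed, illustrative sketch of that cycle assembled from the commands visible in the trace above, not the actual target/auth.sh. The rpc.py path, NQN, host UUID, address and key names are the ones this job uses; the target-side RPC socket is an assumption, since the trace's rpc_cmd wrapper does not show it.

#!/usr/bin/env bash
# Condensed sketch of one DH-HMAC-CHAP verification round (illustrative only).
set -e

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme instance
tgtrpc()  { "$rpc" "$@"; }                         # target side; default RPC socket assumed

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
digest=sha384 dhgroup=ffdhe3072 key=key1 ckey=ckey1

# 1. Restrict the host-side initiator to the digest and DH group under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Authorize the host on the subsystem, binding the DH-HMAC-CHAP key
#    (plus an optional controller key for bidirectional authentication).
tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
       --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 3. Attach from the host with the same keys; this is where authentication runs.
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 4. Confirm on the target that the admin queue pair authenticated as expected.
qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# 5. Detach before the next digest/dhgroup/key combination.
hostrpc bdev_nvme_detach_controller nvme0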
00:23:17.000 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:17.000 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:17.000 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.001 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.262 00:23:17.262 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.262 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.262 13:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.262 { 00:23:17.262 "cntlid": 67, 00:23:17.262 "qid": 0, 00:23:17.262 "state": "enabled", 00:23:17.262 "listen_address": { 00:23:17.262 "trtype": "RDMA", 00:23:17.262 "adrfam": "IPv4", 00:23:17.262 "traddr": "192.168.100.8", 00:23:17.262 "trsvcid": "4420" 00:23:17.262 }, 00:23:17.262 "peer_address": { 00:23:17.262 "trtype": "RDMA", 00:23:17.262 "adrfam": "IPv4", 00:23:17.262 "traddr": "192.168.100.8", 00:23:17.262 "trsvcid": "32906" 00:23:17.262 }, 00:23:17.262 "auth": { 00:23:17.262 "state": "completed", 00:23:17.262 "digest": "sha384", 00:23:17.262 "dhgroup": "ffdhe3072" 00:23:17.262 } 00:23:17.262 } 00:23:17.262 ]' 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha384 == \s\h\a\3\8\4 ]] 00:23:17.262 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.522 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:17.522 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.522 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.522 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.522 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.522 13:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.463 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
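For readers following the trace: every connect_authenticate iteration above and below repeats the same three-step RPC pattern. A condensed sketch of that pattern, with the address, NQNs and key names copied from this run (rpc.py stands for the full scripts/rpc.py path used in the trace, and the key2/ckey2 key objects are registered earlier in the job, outside this excerpt):
# 1) point the host-side SPDK app at the digest/dhgroup under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# 2) allow the host NQN on the target subsystem, bound to a DH-HMAC-CHAP key pair
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3) attach a controller from the host side with the matching keys
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2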
00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.724 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.986 00:23:18.986 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.986 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.986 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:19.247 { 00:23:19.247 "cntlid": 69, 00:23:19.247 "qid": 0, 00:23:19.247 "state": "enabled", 00:23:19.247 "listen_address": { 00:23:19.247 "trtype": "RDMA", 00:23:19.247 "adrfam": "IPv4", 00:23:19.247 "traddr": "192.168.100.8", 00:23:19.247 "trsvcid": "4420" 00:23:19.247 }, 00:23:19.247 "peer_address": { 00:23:19.247 "trtype": "RDMA", 00:23:19.247 "adrfam": "IPv4", 00:23:19.247 "traddr": "192.168.100.8", 00:23:19.247 "trsvcid": "47952" 00:23:19.247 }, 00:23:19.247 "auth": { 00:23:19.247 "state": "completed", 00:23:19.247 "digest": "sha384", 00:23:19.247 "dhgroup": "ffdhe3072" 00:23:19.247 } 00:23:19.247 } 00:23:19.247 ]' 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:19.247 13:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:19.247 13:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:19.247 13:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:19.247 13:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.247 13:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.247 13:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.507 13:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:20.445 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.446 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.706 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.706 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.706 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.706 00:23:20.706 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.706 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.706 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.967 
13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.967 { 00:23:20.967 "cntlid": 71, 00:23:20.967 "qid": 0, 00:23:20.967 "state": "enabled", 00:23:20.967 "listen_address": { 00:23:20.967 "trtype": "RDMA", 00:23:20.967 "adrfam": "IPv4", 00:23:20.967 "traddr": "192.168.100.8", 00:23:20.967 "trsvcid": "4420" 00:23:20.967 }, 00:23:20.967 "peer_address": { 00:23:20.967 "trtype": "RDMA", 00:23:20.967 "adrfam": "IPv4", 00:23:20.967 "traddr": "192.168.100.8", 00:23:20.967 "trsvcid": "40365" 00:23:20.967 }, 00:23:20.967 "auth": { 00:23:20.967 "state": "completed", 00:23:20.967 "digest": "sha384", 00:23:20.967 "dhgroup": "ffdhe3072" 00:23:20.967 } 00:23:20.967 } 00:23:20.967 ]' 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:20.967 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:21.227 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.227 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.227 13:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.227 13:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:22.168 13:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.168 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.429 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.690 00:23:22.690 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:22.690 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:22.690 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.952 { 00:23:22.952 "cntlid": 73, 00:23:22.952 "qid": 0, 00:23:22.952 "state": "enabled", 00:23:22.952 "listen_address": { 00:23:22.952 "trtype": "RDMA", 00:23:22.952 "adrfam": "IPv4", 00:23:22.952 "traddr": "192.168.100.8", 00:23:22.952 "trsvcid": "4420" 00:23:22.952 }, 00:23:22.952 "peer_address": { 00:23:22.952 "trtype": "RDMA", 00:23:22.952 "adrfam": "IPv4", 00:23:22.952 
"traddr": "192.168.100.8", 00:23:22.952 "trsvcid": "40242" 00:23:22.952 }, 00:23:22.952 "auth": { 00:23:22.952 "state": "completed", 00:23:22.952 "digest": "sha384", 00:23:22.952 "dhgroup": "ffdhe4096" 00:23:22.952 } 00:23:22.952 } 00:23:22.952 ]' 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.952 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.212 13:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:24.150 13:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.150 13:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.150 13:51:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.151 13:51:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.151 13:51:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.151 13:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:24.151 13:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.151 13:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.151 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.410 00:23:24.410 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:24.410 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:24.410 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:24.671 { 00:23:24.671 "cntlid": 75, 00:23:24.671 "qid": 0, 00:23:24.671 "state": "enabled", 00:23:24.671 "listen_address": { 00:23:24.671 "trtype": "RDMA", 00:23:24.671 "adrfam": "IPv4", 00:23:24.671 "traddr": "192.168.100.8", 00:23:24.671 "trsvcid": "4420" 00:23:24.671 }, 00:23:24.671 "peer_address": { 00:23:24.671 "trtype": "RDMA", 00:23:24.671 "adrfam": "IPv4", 00:23:24.671 "traddr": "192.168.100.8", 00:23:24.671 "trsvcid": "40607" 00:23:24.671 }, 00:23:24.671 "auth": { 00:23:24.671 "state": "completed", 00:23:24.671 "digest": "sha384", 00:23:24.671 "dhgroup": "ffdhe4096" 00:23:24.671 } 00:23:24.671 } 00:23:24.671 ]' 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:24.671 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:24.931 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.931 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
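After each attach the script checks that authentication actually completed with the expected parameters, not merely that the controller came up. A condensed sketch of that check, using the same jq filters as target/auth.sh lines 44-48 in the trace (rpc.py again stands for the full scripts/rpc.py path):
# the attached controller must be visible on the host-side app ...
[[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# ... and the target's qpair listing must report the negotiated auth parameters
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]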
00:23:24.931 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.931 13:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:25.873 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.134 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.135 13:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.395 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:26.395 { 00:23:26.395 "cntlid": 77, 00:23:26.395 "qid": 0, 00:23:26.395 "state": "enabled", 00:23:26.395 "listen_address": { 00:23:26.395 "trtype": "RDMA", 00:23:26.395 "adrfam": "IPv4", 00:23:26.395 "traddr": "192.168.100.8", 00:23:26.395 "trsvcid": "4420" 00:23:26.395 }, 00:23:26.395 "peer_address": { 00:23:26.395 "trtype": "RDMA", 00:23:26.395 "adrfam": "IPv4", 00:23:26.395 "traddr": "192.168.100.8", 00:23:26.395 "trsvcid": "38393" 00:23:26.395 }, 00:23:26.395 "auth": { 00:23:26.395 "state": "completed", 00:23:26.395 "digest": "sha384", 00:23:26.395 "dhgroup": "ffdhe4096" 00:23:26.395 } 00:23:26.395 } 00:23:26.395 ]' 00:23:26.395 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.656 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.916 13:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:27.858 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:28.119 00:23:28.119 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.119 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.119 13:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
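Alongside the SPDK host app, each iteration also exercises the Linux kernel initiator through nvme-cli, passing the raw DHHC-1 secrets instead of SPDK keyring names, and then removes the host entry before the next combination. A condensed sketch with the secrets elided; the full DHHC-1 strings appear verbatim in the nvme connect lines of this trace:
# connect through the kernel initiator using the raw host secret (and, for keys
# that have a controller counterpart, the matching --dhchap-ctrl-secret)
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret 'DHHC-1:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# drop the host entry from the subsystem before the next digest/dhgroup/key combination
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396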
00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.381 { 00:23:28.381 "cntlid": 79, 00:23:28.381 "qid": 0, 00:23:28.381 "state": "enabled", 00:23:28.381 "listen_address": { 00:23:28.381 "trtype": "RDMA", 00:23:28.381 "adrfam": "IPv4", 00:23:28.381 "traddr": "192.168.100.8", 00:23:28.381 "trsvcid": "4420" 00:23:28.381 }, 00:23:28.381 "peer_address": { 00:23:28.381 "trtype": "RDMA", 00:23:28.381 "adrfam": "IPv4", 00:23:28.381 "traddr": "192.168.100.8", 00:23:28.381 "trsvcid": "45833" 00:23:28.381 }, 00:23:28.381 "auth": { 00:23:28.381 "state": "completed", 00:23:28.381 "digest": "sha384", 00:23:28.381 "dhgroup": "ffdhe4096" 00:23:28.381 } 00:23:28.381 } 00:23:28.381 ]' 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:28.381 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.642 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.642 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.642 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.642 13:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:29.582 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.583 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.843 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.104 00:23:30.104 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.104 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.104 13:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:30.365 { 00:23:30.365 "cntlid": 81, 00:23:30.365 "qid": 0, 00:23:30.365 "state": "enabled", 00:23:30.365 "listen_address": { 00:23:30.365 "trtype": "RDMA", 00:23:30.365 "adrfam": "IPv4", 00:23:30.365 "traddr": "192.168.100.8", 00:23:30.365 "trsvcid": "4420" 00:23:30.365 }, 00:23:30.365 "peer_address": { 00:23:30.365 "trtype": "RDMA", 00:23:30.365 "adrfam": "IPv4", 00:23:30.365 "traddr": "192.168.100.8", 00:23:30.365 "trsvcid": "60051" 00:23:30.365 }, 00:23:30.365 "auth": { 00:23:30.365 "state": "completed", 00:23:30.365 "digest": "sha384", 00:23:30.365 "dhgroup": "ffdhe6144" 00:23:30.365 } 00:23:30.365 } 00:23:30.365 ]' 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.365 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.625 13:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:31.636 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:31.637 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.637 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.637 13:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.637 13:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.896 13:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.896 13:51:24 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.896 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.156 00:23:32.156 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:32.156 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:32.156 13:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.156 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.156 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.156 13:51:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:32.156 13:51:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.416 { 00:23:32.416 "cntlid": 83, 00:23:32.416 "qid": 0, 00:23:32.416 "state": "enabled", 00:23:32.416 "listen_address": { 00:23:32.416 "trtype": "RDMA", 00:23:32.416 "adrfam": "IPv4", 00:23:32.416 "traddr": "192.168.100.8", 00:23:32.416 "trsvcid": "4420" 00:23:32.416 }, 00:23:32.416 "peer_address": { 00:23:32.416 "trtype": "RDMA", 00:23:32.416 "adrfam": "IPv4", 00:23:32.416 "traddr": "192.168.100.8", 00:23:32.416 "trsvcid": "58155" 00:23:32.416 }, 00:23:32.416 "auth": { 00:23:32.416 "state": "completed", 00:23:32.416 "digest": "sha384", 00:23:32.416 "dhgroup": "ffdhe6144" 00:23:32.416 } 00:23:32.416 } 00:23:32.416 ]' 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.416 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.675 13:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:33.615 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.616 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.187 00:23:34.187 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:34.187 13:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.187 13:51:26 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:34.187 { 00:23:34.187 "cntlid": 85, 00:23:34.187 "qid": 0, 00:23:34.187 "state": "enabled", 00:23:34.187 "listen_address": { 00:23:34.187 "trtype": "RDMA", 00:23:34.187 "adrfam": "IPv4", 00:23:34.187 "traddr": "192.168.100.8", 00:23:34.187 "trsvcid": "4420" 00:23:34.187 }, 00:23:34.187 "peer_address": { 00:23:34.187 "trtype": "RDMA", 00:23:34.187 "adrfam": "IPv4", 00:23:34.187 "traddr": "192.168.100.8", 00:23:34.187 "trsvcid": "39777" 00:23:34.187 }, 00:23:34.187 "auth": { 00:23:34.187 "state": "completed", 00:23:34.187 "digest": "sha384", 00:23:34.187 "dhgroup": "ffdhe6144" 00:23:34.187 } 00:23:34.187 } 00:23:34.187 ]' 00:23:34.187 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.447 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.708 13:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:35.280 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.540 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.800 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:35.801 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:36.061 00:23:36.061 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:36.061 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:36.061 13:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:36.321 { 00:23:36.321 "cntlid": 87, 00:23:36.321 "qid": 0, 00:23:36.321 "state": "enabled", 00:23:36.321 "listen_address": { 00:23:36.321 "trtype": "RDMA", 00:23:36.321 "adrfam": "IPv4", 00:23:36.321 "traddr": "192.168.100.8", 00:23:36.321 "trsvcid": "4420" 00:23:36.321 }, 00:23:36.321 "peer_address": { 00:23:36.321 "trtype": "RDMA", 00:23:36.321 "adrfam": "IPv4", 00:23:36.321 "traddr": "192.168.100.8", 00:23:36.321 "trsvcid": "55027" 
00:23:36.321 }, 00:23:36.321 "auth": { 00:23:36.321 "state": "completed", 00:23:36.321 "digest": "sha384", 00:23:36.321 "dhgroup": "ffdhe6144" 00:23:36.321 } 00:23:36.321 } 00:23:36.321 ]' 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.321 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.582 13:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.523 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.785 13:51:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.356 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:38.356 { 00:23:38.356 "cntlid": 89, 00:23:38.356 "qid": 0, 00:23:38.356 "state": "enabled", 00:23:38.356 "listen_address": { 00:23:38.356 "trtype": "RDMA", 00:23:38.356 "adrfam": "IPv4", 00:23:38.356 "traddr": "192.168.100.8", 00:23:38.356 "trsvcid": "4420" 00:23:38.356 }, 00:23:38.356 "peer_address": { 00:23:38.356 "trtype": "RDMA", 00:23:38.356 "adrfam": "IPv4", 00:23:38.356 "traddr": "192.168.100.8", 00:23:38.356 "trsvcid": "48548" 00:23:38.356 }, 00:23:38.356 "auth": { 00:23:38.356 "state": "completed", 00:23:38.356 "digest": "sha384", 00:23:38.356 "dhgroup": "ffdhe8192" 00:23:38.356 } 00:23:38.356 } 00:23:38.356 ]' 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:38.356 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:38.617 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:38.617 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:38.617 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.617 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.617 13:51:31 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.617 13:51:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:39.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.558 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.820 13:51:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.391 00:23:40.392 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:40.392 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:40.392 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:40.652 { 00:23:40.652 "cntlid": 91, 00:23:40.652 "qid": 0, 00:23:40.652 "state": "enabled", 00:23:40.652 "listen_address": { 00:23:40.652 "trtype": "RDMA", 00:23:40.652 "adrfam": "IPv4", 00:23:40.652 "traddr": "192.168.100.8", 00:23:40.652 "trsvcid": "4420" 00:23:40.652 }, 00:23:40.652 "peer_address": { 00:23:40.652 "trtype": "RDMA", 00:23:40.652 "adrfam": "IPv4", 00:23:40.652 "traddr": "192.168.100.8", 00:23:40.652 "trsvcid": "39263" 00:23:40.652 }, 00:23:40.652 "auth": { 00:23:40.652 "state": "completed", 00:23:40.652 "digest": "sha384", 00:23:40.652 "dhgroup": "ffdhe8192" 00:23:40.652 } 00:23:40.652 } 00:23:40.652 ]' 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.652 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.912 13:51:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.855 13:51:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.427 00:23:42.427 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:42.427 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.427 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.687 13:51:35 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.687 { 00:23:42.687 "cntlid": 93, 00:23:42.687 "qid": 0, 00:23:42.687 "state": "enabled", 00:23:42.687 "listen_address": { 00:23:42.687 "trtype": "RDMA", 00:23:42.687 "adrfam": "IPv4", 00:23:42.687 "traddr": "192.168.100.8", 00:23:42.687 "trsvcid": "4420" 00:23:42.687 }, 00:23:42.687 "peer_address": { 00:23:42.687 "trtype": "RDMA", 00:23:42.687 "adrfam": "IPv4", 00:23:42.687 "traddr": "192.168.100.8", 00:23:42.687 "trsvcid": "47837" 00:23:42.687 }, 00:23:42.687 "auth": { 00:23:42.687 "state": "completed", 00:23:42.687 "digest": "sha384", 00:23:42.687 "dhgroup": "ffdhe8192" 00:23:42.687 } 00:23:42.687 } 00:23:42.687 ]' 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:42.687 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:42.947 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.947 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.947 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.947 13:51:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:43.891 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:44.152 13:51:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:44.724 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:44.724 { 00:23:44.724 "cntlid": 95, 00:23:44.724 "qid": 0, 00:23:44.724 "state": "enabled", 00:23:44.724 "listen_address": { 00:23:44.724 "trtype": "RDMA", 00:23:44.724 "adrfam": "IPv4", 00:23:44.724 "traddr": "192.168.100.8", 00:23:44.724 "trsvcid": "4420" 00:23:44.724 }, 00:23:44.724 "peer_address": { 00:23:44.724 "trtype": "RDMA", 00:23:44.724 "adrfam": "IPv4", 00:23:44.724 "traddr": "192.168.100.8", 00:23:44.724 "trsvcid": "33289" 00:23:44.724 }, 00:23:44.724 "auth": { 00:23:44.724 "state": "completed", 00:23:44.724 "digest": "sha384", 00:23:44.724 "dhgroup": "ffdhe8192" 00:23:44.724 } 00:23:44.724 } 00:23:44.724 ]' 00:23:44.724 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:44.985 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:44.985 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:44.985 
13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:44.985 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:44.985 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.985 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.985 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.247 13:51:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:45.819 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.079 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.079 13:51:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.079 13:51:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.080 13:51:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.080 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:23:46.080 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.080 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:46.080 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:46.080 13:51:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.340 13:51:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
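Every iteration traced above follows the same connect_authenticate pattern; a minimal sketch of that flow, reduced to the commands that appear in the trace, is given here for readability. The rpc.py path, target NQN, listener address, and host UUID are taken from the log; the DHHC-1 secrets are placeholders (not the values used in this run), key0/ckey0 stand for key names registered earlier in the test, and the target-side RPCs are assumed to go to SPDK's default socket, which is what rpc_cmd does in the trace.
# Sketch of one connect_authenticate iteration (here sha384/ffdhe8192 with key0).
# Placeholder secrets; adjust digest/dhgroup/keyid per iteration.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
# Host side: restrict bdev_nvme to the digest/dhgroup pair under test.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
# Target side: allow the host on the subsystem with the key (and optional ctrlr key) under test.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach a controller; DH-HMAC-CHAP runs during the CONNECT exchange.
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
  -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Target side: confirm the negotiated digest/dhgroup and the "completed" auth state.
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'
# Tear down, then repeat the handshake with nvme-cli using raw DHHC-1 secrets.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n $subnqn -i 1 -q $hostnqn \
  --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
  --dhchap-secret 'DHHC-1:00:<host secret>:' --dhchap-ctrl-secret 'DHHC-1:03:<ctrlr secret>:'
nvme disconnect -n $subnqn
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn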
00:23:46.341 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.341 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.341 00:23:46.341 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:46.341 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:46.341 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:46.601 { 00:23:46.601 "cntlid": 97, 00:23:46.601 "qid": 0, 00:23:46.601 "state": "enabled", 00:23:46.601 "listen_address": { 00:23:46.601 "trtype": "RDMA", 00:23:46.601 "adrfam": "IPv4", 00:23:46.601 "traddr": "192.168.100.8", 00:23:46.601 "trsvcid": "4420" 00:23:46.601 }, 00:23:46.601 "peer_address": { 00:23:46.601 "trtype": "RDMA", 00:23:46.601 "adrfam": "IPv4", 00:23:46.601 "traddr": "192.168.100.8", 00:23:46.601 "trsvcid": "42172" 00:23:46.601 }, 00:23:46.601 "auth": { 00:23:46.601 "state": "completed", 00:23:46.601 "digest": "sha512", 00:23:46.601 "dhgroup": "null" 00:23:46.601 } 00:23:46.601 } 00:23:46.601 ]' 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:46.601 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:46.863 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.863 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.863 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.863 13:51:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:47.805 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.067 13:51:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.327 00:23:48.327 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:48.327 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:48.327 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:48.328 { 00:23:48.328 "cntlid": 99, 00:23:48.328 "qid": 0, 00:23:48.328 "state": "enabled", 00:23:48.328 "listen_address": { 00:23:48.328 "trtype": "RDMA", 00:23:48.328 "adrfam": "IPv4", 00:23:48.328 "traddr": "192.168.100.8", 00:23:48.328 "trsvcid": "4420" 00:23:48.328 }, 00:23:48.328 "peer_address": { 00:23:48.328 "trtype": "RDMA", 00:23:48.328 "adrfam": "IPv4", 00:23:48.328 "traddr": "192.168.100.8", 00:23:48.328 "trsvcid": "57012" 00:23:48.328 }, 00:23:48.328 "auth": { 00:23:48.328 "state": "completed", 00:23:48.328 "digest": "sha512", 00:23:48.328 "dhgroup": "null" 00:23:48.328 } 00:23:48.328 } 00:23:48.328 ]' 00:23:48.328 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.588 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.849 13:51:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:49.426 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.689 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.689 13:51:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.689 13:51:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.689 13:51:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.689 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:49.689 13:51:42 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:49.690 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.950 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.950 00:23:50.211 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:50.211 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.211 13:51:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:50.211 { 00:23:50.211 "cntlid": 101, 00:23:50.211 "qid": 0, 00:23:50.211 "state": "enabled", 00:23:50.211 "listen_address": { 00:23:50.211 "trtype": "RDMA", 00:23:50.211 "adrfam": "IPv4", 00:23:50.211 "traddr": "192.168.100.8", 00:23:50.211 "trsvcid": "4420" 00:23:50.211 }, 00:23:50.211 "peer_address": { 00:23:50.211 "trtype": "RDMA", 
00:23:50.211 "adrfam": "IPv4", 00:23:50.211 "traddr": "192.168.100.8", 00:23:50.211 "trsvcid": "47341" 00:23:50.211 }, 00:23:50.211 "auth": { 00:23:50.211 "state": "completed", 00:23:50.211 "digest": "sha512", 00:23:50.211 "dhgroup": "null" 00:23:50.211 } 00:23:50.211 } 00:23:50.211 ]' 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:50.211 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:50.473 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.473 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.473 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.473 13:51:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:51.416 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.676 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.937 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.937 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:52.197 { 00:23:52.197 "cntlid": 103, 00:23:52.197 "qid": 0, 00:23:52.197 "state": "enabled", 00:23:52.197 "listen_address": { 00:23:52.197 "trtype": "RDMA", 00:23:52.197 "adrfam": "IPv4", 00:23:52.197 "traddr": "192.168.100.8", 00:23:52.197 "trsvcid": "4420" 00:23:52.197 }, 00:23:52.197 "peer_address": { 00:23:52.197 "trtype": "RDMA", 00:23:52.197 "adrfam": "IPv4", 00:23:52.197 "traddr": "192.168.100.8", 00:23:52.197 "trsvcid": "33973" 00:23:52.197 }, 00:23:52.197 "auth": { 00:23:52.197 "state": "completed", 00:23:52.197 "digest": "sha512", 00:23:52.197 "dhgroup": "null" 00:23:52.197 } 00:23:52.197 } 00:23:52.197 ]' 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.197 13:51:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.459 13:51:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:23:53.401 13:51:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.401 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.662 00:23:53.662 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:53.662 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:53.662 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.923 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.923 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.923 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:53.923 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.923 13:51:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:53.923 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:53.923 { 00:23:53.923 "cntlid": 105, 00:23:53.923 "qid": 0, 00:23:53.923 "state": "enabled", 00:23:53.923 "listen_address": { 00:23:53.923 "trtype": "RDMA", 00:23:53.923 "adrfam": "IPv4", 00:23:53.923 "traddr": "192.168.100.8", 00:23:53.924 "trsvcid": "4420" 00:23:53.924 }, 00:23:53.924 "peer_address": { 00:23:53.924 "trtype": "RDMA", 00:23:53.924 "adrfam": "IPv4", 00:23:53.924 "traddr": "192.168.100.8", 00:23:53.924 "trsvcid": "51779" 00:23:53.924 }, 00:23:53.924 "auth": { 00:23:53.924 "state": "completed", 00:23:53.924 "digest": "sha512", 00:23:53.924 "dhgroup": "ffdhe2048" 00:23:53.924 } 00:23:53.924 } 00:23:53.924 ]' 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.924 13:51:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.185 13:51:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:55.126 13:51:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.388 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.648 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.649 13:51:48 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:55.649 { 00:23:55.649 "cntlid": 107, 00:23:55.649 "qid": 0, 00:23:55.649 "state": "enabled", 00:23:55.649 "listen_address": { 00:23:55.649 "trtype": "RDMA", 00:23:55.649 "adrfam": "IPv4", 00:23:55.649 "traddr": "192.168.100.8", 00:23:55.649 "trsvcid": "4420" 00:23:55.649 }, 00:23:55.649 "peer_address": { 00:23:55.649 "trtype": "RDMA", 00:23:55.649 "adrfam": "IPv4", 00:23:55.649 "traddr": "192.168.100.8", 00:23:55.649 "trsvcid": "40795" 00:23:55.649 }, 00:23:55.649 "auth": { 00:23:55.649 "state": "completed", 00:23:55.649 "digest": "sha512", 00:23:55.649 "dhgroup": "ffdhe2048" 00:23:55.649 } 00:23:55.649 } 00:23:55.649 ]' 00:23:55.649 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:55.910 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.171 13:51:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:23:56.773 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.033 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:57.033 13:51:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.033 13:51:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.033 13:51:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.033 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.034 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.294 13:51:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.294 00:23:57.294 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:57.294 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:57.294 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:57.554 { 00:23:57.554 "cntlid": 109, 00:23:57.554 "qid": 0, 00:23:57.554 "state": "enabled", 00:23:57.554 "listen_address": { 00:23:57.554 "trtype": "RDMA", 00:23:57.554 "adrfam": "IPv4", 00:23:57.554 "traddr": "192.168.100.8", 00:23:57.554 "trsvcid": "4420" 00:23:57.554 }, 00:23:57.554 "peer_address": { 00:23:57.554 "trtype": "RDMA", 00:23:57.554 "adrfam": "IPv4", 00:23:57.554 "traddr": "192.168.100.8", 00:23:57.554 "trsvcid": "44796" 00:23:57.554 }, 00:23:57.554 "auth": { 00:23:57.554 "state": "completed", 00:23:57.554 "digest": "sha512", 00:23:57.554 "dhgroup": "ffdhe2048" 00:23:57.554 } 00:23:57.554 } 00:23:57.554 ]' 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:57.554 13:51:50 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:57.554 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:57.815 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.815 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.815 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.815 13:51:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:58.755 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:59.015 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:59.015 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:59.015 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # 
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.016 13:51:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:59.276 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:59.276 { 00:23:59.276 "cntlid": 111, 00:23:59.276 "qid": 0, 00:23:59.276 "state": "enabled", 00:23:59.276 "listen_address": { 00:23:59.276 "trtype": "RDMA", 00:23:59.276 "adrfam": "IPv4", 00:23:59.276 "traddr": "192.168.100.8", 00:23:59.276 "trsvcid": "4420" 00:23:59.276 }, 00:23:59.276 "peer_address": { 00:23:59.276 "trtype": "RDMA", 00:23:59.276 "adrfam": "IPv4", 00:23:59.276 "traddr": "192.168.100.8", 00:23:59.276 "trsvcid": "51041" 00:23:59.276 }, 00:23:59.276 "auth": { 00:23:59.276 "state": "completed", 00:23:59.276 "digest": "sha512", 00:23:59.276 "dhgroup": "ffdhe2048" 00:23:59.276 } 00:23:59.276 } 00:23:59.276 ]' 00:23:59.276 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.537 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.797 13:51:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:24:00.737 
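The trace repeats one and the same DH-HMAC-CHAP exercise for every key index (key0..key3) and for each DH group in turn (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144 in this stretch), always with the sha512 digest. Consolidated from the RPCs visible in the trace, the SPDK-initiator half of a single iteration amounts to the sketch below. In it, rpc_cmd stands for the test harness's target-side rpc.py wrapper (its socket is not shown in this part of the log), while the host-side bdev_nvme application is reached explicitly through /var/tmp/host.sock as in the trace; key0/ckey0 and ffdhe2048 are just one of the combinations being cycled.

# Host side: restrict the digests and DH groups the initiator may offer
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN on the subsystem with the matching key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach over RDMA, authenticating with the same key pair
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# After the checks shown further below, the controller is detached again
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_detach_controller nvme0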
13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.737 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.737 13:51:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.737 13:51:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.738 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.998 00:24:00.998 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:00.998 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.998 13:51:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:01.259 13:51:54 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:01.259 { 00:24:01.259 "cntlid": 113, 00:24:01.259 "qid": 0, 00:24:01.259 "state": "enabled", 00:24:01.259 "listen_address": { 00:24:01.259 "trtype": "RDMA", 00:24:01.259 "adrfam": "IPv4", 00:24:01.259 "traddr": "192.168.100.8", 00:24:01.259 "trsvcid": "4420" 00:24:01.259 }, 00:24:01.259 "peer_address": { 00:24:01.259 "trtype": "RDMA", 00:24:01.259 "adrfam": "IPv4", 00:24:01.259 "traddr": "192.168.100.8", 00:24:01.259 "trsvcid": "56234" 00:24:01.259 }, 00:24:01.259 "auth": { 00:24:01.259 "state": "completed", 00:24:01.259 "digest": "sha512", 00:24:01.259 "dhgroup": "ffdhe3072" 00:24:01.259 } 00:24:01.259 } 00:24:01.259 ]' 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:01.259 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:01.519 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.519 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.519 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.520 13:51:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:02.461 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.721 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.981 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:02.981 { 00:24:02.981 "cntlid": 115, 00:24:02.981 "qid": 0, 00:24:02.981 "state": "enabled", 00:24:02.981 "listen_address": { 00:24:02.981 "trtype": "RDMA", 00:24:02.981 "adrfam": "IPv4", 00:24:02.981 "traddr": "192.168.100.8", 00:24:02.981 "trsvcid": "4420" 00:24:02.981 }, 00:24:02.981 "peer_address": { 00:24:02.981 "trtype": "RDMA", 00:24:02.981 "adrfam": "IPv4", 00:24:02.981 
"traddr": "192.168.100.8", 00:24:02.981 "trsvcid": "54139" 00:24:02.981 }, 00:24:02.981 "auth": { 00:24:02.981 "state": "completed", 00:24:02.981 "digest": "sha512", 00:24:02.981 "dhgroup": "ffdhe3072" 00:24:02.981 } 00:24:02.981 } 00:24:02.981 ]' 00:24:02.981 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:03.242 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.242 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:03.242 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:03.242 13:51:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:03.242 13:51:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.242 13:51:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.242 13:51:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.502 13:51:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:24:04.442 13:51:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.442 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.702 00:24:04.702 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:04.702 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:04.702 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:04.963 { 00:24:04.963 "cntlid": 117, 00:24:04.963 "qid": 0, 00:24:04.963 "state": "enabled", 00:24:04.963 "listen_address": { 00:24:04.963 "trtype": "RDMA", 00:24:04.963 "adrfam": "IPv4", 00:24:04.963 "traddr": "192.168.100.8", 00:24:04.963 "trsvcid": "4420" 00:24:04.963 }, 00:24:04.963 "peer_address": { 00:24:04.963 "trtype": "RDMA", 00:24:04.963 "adrfam": "IPv4", 00:24:04.963 "traddr": "192.168.100.8", 00:24:04.963 "trsvcid": "54985" 00:24:04.963 }, 00:24:04.963 "auth": { 00:24:04.963 "state": "completed", 00:24:04.963 "digest": "sha512", 00:24:04.963 "dhgroup": "ffdhe3072" 00:24:04.963 } 00:24:04.963 } 00:24:04.963 ]' 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:04.963 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:05.223 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.223 13:51:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.223 13:51:57 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.223 13:51:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:24:06.170 13:51:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:06.170 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:06.431 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:24:06.691 00:24:06.691 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:06.691 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:06.691 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:06.951 { 00:24:06.951 "cntlid": 119, 00:24:06.951 "qid": 0, 00:24:06.951 "state": "enabled", 00:24:06.951 "listen_address": { 00:24:06.951 "trtype": "RDMA", 00:24:06.951 "adrfam": "IPv4", 00:24:06.951 "traddr": "192.168.100.8", 00:24:06.951 "trsvcid": "4420" 00:24:06.951 }, 00:24:06.951 "peer_address": { 00:24:06.951 "trtype": "RDMA", 00:24:06.951 "adrfam": "IPv4", 00:24:06.951 "traddr": "192.168.100.8", 00:24:06.951 "trsvcid": "50179" 00:24:06.951 }, 00:24:06.951 "auth": { 00:24:06.951 "state": "completed", 00:24:06.951 "digest": "sha512", 00:24:06.951 "dhgroup": "ffdhe3072" 00:24:06.951 } 00:24:06.951 } 00:24:06.951 ]' 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.951 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.211 13:51:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:08.151 13:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.412 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.673 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.673 13:52:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:08.673 13:52:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:08.673 { 00:24:08.673 "cntlid": 121, 00:24:08.674 "qid": 0, 00:24:08.674 "state": "enabled", 00:24:08.674 "listen_address": { 00:24:08.674 "trtype": "RDMA", 00:24:08.674 "adrfam": "IPv4", 00:24:08.674 "traddr": "192.168.100.8", 00:24:08.674 "trsvcid": "4420" 00:24:08.674 }, 00:24:08.674 "peer_address": { 00:24:08.674 "trtype": "RDMA", 00:24:08.674 "adrfam": "IPv4", 00:24:08.674 "traddr": "192.168.100.8", 00:24:08.674 "trsvcid": "43386" 00:24:08.674 }, 00:24:08.674 "auth": { 00:24:08.674 "state": "completed", 00:24:08.674 "digest": "sha512", 00:24:08.674 "dhgroup": "ffdhe4096" 00:24:08.674 } 00:24:08.674 } 00:24:08.674 ]' 00:24:08.674 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.934 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.195 13:52:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:24:09.765 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:10.027 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:10.287 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:24:10.287 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:10.287 
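Between attach and detach, each iteration asks the target for the qpairs of the subsystem and checks that the reported auth block matches what the host was restricted to, exactly as the jq filters in the trace show. A short sketch of that verification (rpc_cmd again standing for the harness's target-side rpc.py wrapper; the expected values are the ones from the surrounding ffdhe4096/sha512 iteration):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]  # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished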
13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.288 13:52:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.548 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:10.548 { 00:24:10.548 "cntlid": 123, 00:24:10.548 "qid": 0, 00:24:10.548 "state": "enabled", 00:24:10.548 "listen_address": { 00:24:10.548 "trtype": "RDMA", 00:24:10.548 "adrfam": "IPv4", 00:24:10.548 "traddr": "192.168.100.8", 00:24:10.548 "trsvcid": "4420" 00:24:10.548 }, 00:24:10.548 "peer_address": { 00:24:10.548 "trtype": "RDMA", 00:24:10.548 "adrfam": "IPv4", 00:24:10.548 "traddr": "192.168.100.8", 00:24:10.548 "trsvcid": "50074" 00:24:10.548 }, 00:24:10.548 "auth": { 00:24:10.548 "state": "completed", 00:24:10.548 "digest": "sha512", 00:24:10.548 "dhgroup": "ffdhe4096" 00:24:10.548 } 00:24:10.548 } 00:24:10.548 ]' 00:24:10.548 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.809 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.070 13:52:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:24:11.641 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:11.902 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:12.162 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.163 13:52:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.163 13:52:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.163 13:52:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.163 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.163 13:52:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.423 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:12.423 { 00:24:12.423 "cntlid": 125, 00:24:12.423 "qid": 0, 00:24:12.423 "state": "enabled", 00:24:12.423 "listen_address": { 00:24:12.423 "trtype": "RDMA", 00:24:12.423 "adrfam": "IPv4", 00:24:12.423 "traddr": "192.168.100.8", 00:24:12.423 "trsvcid": "4420" 00:24:12.423 }, 00:24:12.423 "peer_address": { 00:24:12.423 "trtype": "RDMA", 00:24:12.423 "adrfam": "IPv4", 00:24:12.423 "traddr": "192.168.100.8", 00:24:12.423 "trsvcid": "46794" 00:24:12.423 }, 00:24:12.423 "auth": { 00:24:12.423 "state": "completed", 00:24:12.423 "digest": "sha512", 00:24:12.423 "dhgroup": "ffdhe4096" 00:24:12.423 } 00:24:12.423 } 00:24:12.423 ]' 00:24:12.423 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.684 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.944 13:52:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:24:13.516 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:13.777 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.038 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:14.039 13:52:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:14.300 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.300 13:52:07 
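Each iteration also exercises the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed literally on the command line, disconnects, and the host entry is removed from the subsystem before the next key/DH-group combination, as in the trace just above. A trimmed sketch; DHCHAP_KEY and DHCHAP_CTRL_KEY are illustrative placeholders for the full DHHC-1 strings that appear verbatim in the log:

# Placeholders for the DHHC-1:xx:...: secrets shown in the trace
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# Drop the host from the subsystem before the next combination is configured
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396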
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:14.300 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:14.300 { 00:24:14.300 "cntlid": 127, 00:24:14.300 "qid": 0, 00:24:14.300 "state": "enabled", 00:24:14.300 "listen_address": { 00:24:14.300 "trtype": "RDMA", 00:24:14.300 "adrfam": "IPv4", 00:24:14.300 "traddr": "192.168.100.8", 00:24:14.300 "trsvcid": "4420" 00:24:14.300 }, 00:24:14.300 "peer_address": { 00:24:14.300 "trtype": "RDMA", 00:24:14.300 "adrfam": "IPv4", 00:24:14.300 "traddr": "192.168.100.8", 00:24:14.300 "trsvcid": "55786" 00:24:14.300 }, 00:24:14.300 "auth": { 00:24:14.300 "state": "completed", 00:24:14.300 "digest": "sha512", 00:24:14.300 "dhgroup": "ffdhe4096" 00:24:14.300 } 00:24:14.300 } 00:24:14.300 ]' 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.560 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.820 13:52:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:24:15.392 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:15.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:15.653 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.914 13:52:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.915 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.915 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.175 00:24:16.175 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:16.175 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:16.175 13:52:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:16.436 { 00:24:16.436 "cntlid": 129, 00:24:16.436 "qid": 0, 00:24:16.436 "state": "enabled", 00:24:16.436 "listen_address": { 00:24:16.436 "trtype": "RDMA", 00:24:16.436 "adrfam": "IPv4", 00:24:16.436 "traddr": "192.168.100.8", 00:24:16.436 "trsvcid": "4420" 00:24:16.436 }, 00:24:16.436 "peer_address": { 00:24:16.436 "trtype": "RDMA", 00:24:16.436 "adrfam": "IPv4", 00:24:16.436 
"traddr": "192.168.100.8", 00:24:16.436 "trsvcid": "41115" 00:24:16.436 }, 00:24:16.436 "auth": { 00:24:16.436 "state": "completed", 00:24:16.436 "digest": "sha512", 00:24:16.436 "dhgroup": "ffdhe6144" 00:24:16.436 } 00:24:16.436 } 00:24:16.436 ]' 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:16.436 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:16.697 13:52:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:17.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:17.639 13:52:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.899 13:52:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:17.899 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.899 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.159 00:24:18.159 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:18.159 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:18.159 13:52:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:18.421 { 00:24:18.421 "cntlid": 131, 00:24:18.421 "qid": 0, 00:24:18.421 "state": "enabled", 00:24:18.421 "listen_address": { 00:24:18.421 "trtype": "RDMA", 00:24:18.421 "adrfam": "IPv4", 00:24:18.421 "traddr": "192.168.100.8", 00:24:18.421 "trsvcid": "4420" 00:24:18.421 }, 00:24:18.421 "peer_address": { 00:24:18.421 "trtype": "RDMA", 00:24:18.421 "adrfam": "IPv4", 00:24:18.421 "traddr": "192.168.100.8", 00:24:18.421 "trsvcid": "38311" 00:24:18.421 }, 00:24:18.421 "auth": { 00:24:18.421 "state": "completed", 00:24:18.421 "digest": "sha512", 00:24:18.421 "dhgroup": "ffdhe6144" 00:24:18.421 } 00:24:18.421 } 00:24:18.421 ]' 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
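The entries above close out one pass of the connect_authenticate helper (sha512 with ffdhe6144 and key1): the host bdev layer is pinned to a single digest and DH group, the target host entry is created with the key under test, the controller is attached, the negotiated auth parameters are read back from the target qpair, and the controller is detached again. Below is a minimal sketch of that per-iteration flow, assembled only from the RPCs and flags that appear in this trace; the target-side RPC socket is not shown in the log, so using rpc.py's default socket for it here is an assumption.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side (bdev_nvme initiator): allow only the digest and DH group under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side: register the host NQN with the key pair being exercised.
  # (The target's RPC socket is not visible in the trace; the default rpc.py socket is assumed.)
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach with the matching key pair, then read back the negotiated auth state from the target.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expected: completed

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0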
00:24:18.421 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.683 13:52:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:19.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.623 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.195 00:24:20.195 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:20.195 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:20.195 13:52:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:20.195 { 00:24:20.195 "cntlid": 133, 00:24:20.195 "qid": 0, 00:24:20.195 "state": "enabled", 00:24:20.195 "listen_address": { 00:24:20.195 "trtype": "RDMA", 00:24:20.195 "adrfam": "IPv4", 00:24:20.195 "traddr": "192.168.100.8", 00:24:20.195 "trsvcid": "4420" 00:24:20.195 }, 00:24:20.195 "peer_address": { 00:24:20.195 "trtype": "RDMA", 00:24:20.195 "adrfam": "IPv4", 00:24:20.195 "traddr": "192.168.100.8", 00:24:20.195 "trsvcid": "60758" 00:24:20.195 }, 00:24:20.195 "auth": { 00:24:20.195 "state": "completed", 00:24:20.195 "digest": "sha512", 00:24:20.195 "dhgroup": "ffdhe6144" 00:24:20.195 } 00:24:20.195 } 00:24:20.195 ]' 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.195 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:20.491 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:20.491 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:20.491 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.491 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.491 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.491 13:52:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.461 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:21.722 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:21.982 00:24:21.982 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:21.982 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:21.982 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.242 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
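The qpair listing that follows is what the helper checks: the target reports the negotiated digest, DH group, and an auth state of completed for the single RDMA queue pair, after which the same key is exercised in-band with nvme-cli using the DHHC-1 secrets and the connection is torn down. A minimal sketch of those checks, using only the jq filters, test comparisons, and nvme-cli flags visible in this trace; the target-side socket is again assumed to be rpc.py's default, and DHCHAP_SECRET stands in for one of the secret strings printed in the log.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  subnqn=nqn.2024-03.io.spdk:cnode0

  # RPC-level assertions on the qpair listing (same jq filters and comparisons as in the trace).
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

  # In-band check with nvme-cli: connect with the DHHC-1 secret for the same key, then disconnect.
  # DHCHAP_SECRET is a placeholder for a secret shown in the log; --dhchap-ctrl-secret is passed
  # only for keys that also have a controller key configured (key3 does not).
  nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret "$DHCHAP_SECRET"
  nvme disconnect -n "$subnqn"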
00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:22.243 { 00:24:22.243 "cntlid": 135, 00:24:22.243 "qid": 0, 00:24:22.243 "state": "enabled", 00:24:22.243 "listen_address": { 00:24:22.243 "trtype": "RDMA", 00:24:22.243 "adrfam": "IPv4", 00:24:22.243 "traddr": "192.168.100.8", 00:24:22.243 "trsvcid": "4420" 00:24:22.243 }, 00:24:22.243 "peer_address": { 00:24:22.243 "trtype": "RDMA", 00:24:22.243 "adrfam": "IPv4", 00:24:22.243 "traddr": "192.168.100.8", 00:24:22.243 "trsvcid": "46726" 00:24:22.243 }, 00:24:22.243 "auth": { 00:24:22.243 "state": "completed", 00:24:22.243 "digest": "sha512", 00:24:22.243 "dhgroup": "ffdhe6144" 00:24:22.243 } 00:24:22.243 } 00:24:22.243 ]' 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:22.243 13:52:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:22.243 13:52:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:22.243 13:52:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:22.243 13:52:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.243 13:52:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.243 13:52:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.504 13:52:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.447 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.707 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:24:23.707 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:24:23.707 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:23.707 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:23.707 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:23.707 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.708 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.708 13:52:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:23.708 13:52:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.708 13:52:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:23.708 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.708 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.279 00:24:24.279 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:24.279 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.279 13:52:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:24.279 { 00:24:24.279 "cntlid": 137, 00:24:24.279 "qid": 0, 00:24:24.279 "state": "enabled", 00:24:24.279 "listen_address": { 00:24:24.279 "trtype": "RDMA", 00:24:24.279 "adrfam": "IPv4", 00:24:24.279 "traddr": "192.168.100.8", 00:24:24.279 "trsvcid": "4420" 00:24:24.279 }, 00:24:24.279 "peer_address": { 00:24:24.279 "trtype": "RDMA", 00:24:24.279 "adrfam": "IPv4", 00:24:24.279 "traddr": "192.168.100.8", 00:24:24.279 "trsvcid": "57677" 00:24:24.279 }, 00:24:24.279 "auth": { 00:24:24.279 "state": "completed", 00:24:24.279 "digest": "sha512", 00:24:24.279 "dhgroup": "ffdhe8192" 00:24:24.279 } 00:24:24.279 } 00:24:24.279 ]' 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:24.279 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:24.539 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.539 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.539 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.540 13:52:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.482 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.744 13:52:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:26.314 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:26.314 { 00:24:26.314 "cntlid": 139, 00:24:26.314 "qid": 0, 00:24:26.314 "state": "enabled", 00:24:26.314 "listen_address": { 00:24:26.314 "trtype": "RDMA", 00:24:26.314 "adrfam": "IPv4", 00:24:26.314 "traddr": "192.168.100.8", 00:24:26.314 "trsvcid": "4420" 00:24:26.314 }, 00:24:26.314 "peer_address": { 00:24:26.314 "trtype": "RDMA", 00:24:26.314 "adrfam": "IPv4", 00:24:26.314 "traddr": "192.168.100.8", 00:24:26.314 "trsvcid": "60829" 00:24:26.314 }, 00:24:26.314 "auth": { 00:24:26.314 "state": "completed", 00:24:26.314 "digest": "sha512", 00:24:26.314 "dhgroup": "ffdhe8192" 00:24:26.314 } 00:24:26.314 } 00:24:26.314 ]' 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:26.314 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:26.574 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:26.574 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:26.574 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:26.574 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:26.574 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.574 13:52:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:NjQzNGU2YWZlMTIzZTk3ZjM1NjMxNTIzYzAzOGI3Y2Usp41V: --dhchap-ctrl-secret DHHC-1:02:OTg5OTZkMzcwZjMyOGIwMjZlYTYyNTMzYWY3NzE1ZGNjNWNhYjczY2NmMTJhY2Y5Op7o2Q==: 00:24:27.517 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.778 13:52:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.347 00:24:28.347 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:28.347 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:28.347 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:28.607 { 00:24:28.607 "cntlid": 141, 00:24:28.607 "qid": 0, 00:24:28.607 "state": "enabled", 00:24:28.607 "listen_address": { 00:24:28.607 "trtype": "RDMA", 00:24:28.607 "adrfam": "IPv4", 00:24:28.607 "traddr": "192.168.100.8", 00:24:28.607 "trsvcid": "4420" 00:24:28.607 }, 00:24:28.607 "peer_address": { 00:24:28.607 "trtype": "RDMA", 00:24:28.607 "adrfam": "IPv4", 00:24:28.607 "traddr": "192.168.100.8", 00:24:28.607 "trsvcid": "39965" 00:24:28.607 }, 00:24:28.607 "auth": { 00:24:28.607 "state": "completed", 00:24:28.607 "digest": "sha512", 00:24:28.607 "dhgroup": "ffdhe8192" 00:24:28.607 } 00:24:28.607 } 00:24:28.607 ]' 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.607 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:28.867 13:52:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YjZmYjFmMTUyMjA4NDBjMTMxYjQwMGExMzIyZmFkMWQxNTYyNjgzNzMyOTZjMjBlBJFHvg==: --dhchap-ctrl-secret DHHC-1:01:YWZhMjU0YTExNDMwOGUyYmRiZjFlODQ0M2YwZmY2YjgZW+if: 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.808 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:30.069 13:52:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:30.641 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:30.641 { 00:24:30.641 "cntlid": 143, 00:24:30.641 "qid": 0, 00:24:30.641 "state": "enabled", 00:24:30.641 "listen_address": { 00:24:30.641 "trtype": "RDMA", 00:24:30.641 "adrfam": "IPv4", 00:24:30.641 "traddr": "192.168.100.8", 00:24:30.641 "trsvcid": "4420" 00:24:30.641 }, 00:24:30.641 "peer_address": { 00:24:30.641 "trtype": "RDMA", 00:24:30.641 "adrfam": "IPv4", 00:24:30.641 "traddr": "192.168.100.8", 00:24:30.641 "trsvcid": "40711" 
00:24:30.641 }, 00:24:30.641 "auth": { 00:24:30.641 "state": "completed", 00:24:30.641 "digest": "sha512", 00:24:30.641 "dhgroup": "ffdhe8192" 00:24:30.641 } 00:24:30.641 } 00:24:30.641 ]' 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:30.641 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:30.902 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:30.902 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:30.902 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.902 13:52:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:31.843 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:32.104 
13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.104 13:52:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.677 00:24:32.677 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:24:32.677 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:24:32.677 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:24:32.937 { 00:24:32.937 "cntlid": 145, 00:24:32.937 "qid": 0, 00:24:32.937 "state": "enabled", 00:24:32.937 "listen_address": { 00:24:32.937 "trtype": "RDMA", 00:24:32.937 "adrfam": "IPv4", 00:24:32.937 "traddr": "192.168.100.8", 00:24:32.937 "trsvcid": "4420" 00:24:32.937 }, 00:24:32.937 "peer_address": { 00:24:32.937 "trtype": "RDMA", 00:24:32.937 "adrfam": "IPv4", 00:24:32.937 "traddr": "192.168.100.8", 00:24:32.937 "trsvcid": "37582" 00:24:32.937 }, 00:24:32.937 "auth": { 00:24:32.937 "state": "completed", 00:24:32.937 "digest": "sha512", 00:24:32.937 "dhgroup": "ffdhe8192" 00:24:32.937 } 00:24:32.937 } 00:24:32.937 ]' 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:32.937 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:33.198 13:52:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MWJhZjFkNzJmMjZjMTk5MjE4YjZlYTgyZTNmZjAzZjRhYzc3ZmYzOGUyZDQ3YjQ0IoEjig==: --dhchap-ctrl-secret DHHC-1:03:NmVjN2I1MDM0OTBhZTAzNTUzYjcxMmQ2NDgwZDM5YTUwMTZlNjQzYzc2NTNjZWY3YWU1MzMyYjM0ODEwZTgyYjKIc5k=: 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:34.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:34.138 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:24:34.139 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:34.139 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:24:34.139 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:34.139 13:52:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:34.139 13:52:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:06.254 request: 00:25:06.254 { 00:25:06.254 "name": "nvme0", 00:25:06.254 "trtype": "rdma", 00:25:06.254 "traddr": "192.168.100.8", 00:25:06.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:25:06.254 "adrfam": "ipv4", 00:25:06.254 "trsvcid": "4420", 00:25:06.254 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:06.254 "dhchap_key": "key2", 00:25:06.254 "method": "bdev_nvme_attach_controller", 00:25:06.254 "req_id": 1 00:25:06.254 } 00:25:06.254 Got JSON-RPC error response 00:25:06.254 response: 00:25:06.254 { 00:25:06.254 "code": -5, 00:25:06.254 "message": "Input/output error" 00:25:06.254 } 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 
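The run of "NOT hostrpc bdev_nvme_attach_controller ..." entries around this point are the negative DH-HMAC-CHAP cases: the host deliberately offers a key or controller key the subsystem was not configured with, and the step only passes if the attach fails, which is why the JSON-RPC responses above and below come back with code -5 (Input/output error). A minimal sketch of that expect-failure idiom, using only the rpc.py path and host socket visible in the log; the helper name expect_failure is illustrative, the real wrapper is the NOT() function traced from autotest_common.sh:

  expect_failure() {
      # invert the exit status: succeed only when the wrapped command fails
      if "$@"; then
          echo "unexpected success: $*" >&2
          return 1
      fi
      return 0
  }

  expect_failure /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2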
00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.254 13:52:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:06.254 request: 00:25:06.254 { 00:25:06.254 "name": "nvme0", 00:25:06.254 "trtype": "rdma", 00:25:06.254 "traddr": "192.168.100.8", 00:25:06.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:25:06.254 "adrfam": "ipv4", 00:25:06.254 "trsvcid": "4420", 00:25:06.254 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:06.254 "dhchap_key": "key1", 00:25:06.254 "dhchap_ctrlr_key": "ckey2", 00:25:06.254 "method": "bdev_nvme_attach_controller", 00:25:06.254 "req_id": 1 00:25:06.254 } 00:25:06.254 Got JSON-RPC error response 00:25:06.254 response: 00:25:06.254 { 00:25:06.254 "code": -5, 00:25:06.254 "message": "Input/output error" 00:25:06.254 } 00:25:06.254 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:06.254 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:06.254 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:06.254 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:06.254 13:52:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.255 13:52:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.408 request: 00:25:38.408 { 00:25:38.408 "name": "nvme0", 00:25:38.408 "trtype": "rdma", 00:25:38.408 "traddr": "192.168.100.8", 00:25:38.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:25:38.408 "adrfam": "ipv4", 00:25:38.408 "trsvcid": "4420", 00:25:38.408 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:38.408 "dhchap_key": "key1", 00:25:38.408 "dhchap_ctrlr_key": "ckey1", 00:25:38.408 "method": "bdev_nvme_attach_controller", 00:25:38.408 "req_id": 1 00:25:38.408 } 00:25:38.408 Got JSON-RPC error response 00:25:38.408 response: 00:25:38.408 { 00:25:38.408 "code": -5, 00:25:38.408 "message": "Input/output error" 00:25:38.408 } 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2150219 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2150219 ']' 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2150219 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2150219 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2150219' 00:25:38.408 killing process with pid 2150219 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2150219 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2150219 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2191223 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2191223 00:25:38.408 13:53:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:25:38.409 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2191223 ']' 00:25:38.409 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.409 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:38.409 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.409 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:38.409 13:53:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2191223 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2191223 ']' 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
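At this point auth.sh tears down the first target (killprocess 2150219) and brings up a fresh one with nvmfappstart --wait-for-rpc -L nvmf_auth, after which waitforlisten blocks until pid 2191223 exposes its JSON-RPC socket. A minimal sketch of that start-and-wait sequence, assuming only the nvmf_tgt binary, rpc.py path and default /var/tmp/spdk.sock shown in the log (the real helpers live in nvmf/common.sh and autotest_common.sh):

  nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  $nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # poll until the target creates its JSON-RPC UNIX-domain socket
  for _ in $(seq 1 100); do
      [ -S "$sock" ] && break
      sleep 0.1
  done

  # with --wait-for-rpc the app stays in a pre-init state until framework_start_init is sent
  $rpc -s "$sock" framework_start_init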
00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.409 13:53:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:38.409 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:25:38.409 { 00:25:38.409 "cntlid": 1, 00:25:38.409 "qid": 0, 00:25:38.409 "state": "enabled", 00:25:38.409 "listen_address": { 00:25:38.409 "trtype": "RDMA", 00:25:38.409 "adrfam": "IPv4", 00:25:38.409 "traddr": "192.168.100.8", 00:25:38.409 "trsvcid": "4420" 00:25:38.409 }, 00:25:38.409 "peer_address": { 00:25:38.409 "trtype": "RDMA", 00:25:38.409 "adrfam": "IPv4", 00:25:38.409 "traddr": "192.168.100.8", 00:25:38.409 "trsvcid": "43563" 00:25:38.409 }, 00:25:38.409 "auth": { 00:25:38.409 "state": "completed", 00:25:38.409 "digest": "sha512", 00:25:38.409 "dhgroup": "ffdhe8192" 00:25:38.409 } 00:25:38.409 } 00:25:38.409 ]' 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:38.409 13:53:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:38.409 13:53:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:MmU2NzhjZmE4ZDUwYTQ0MzIzOWI2MzIxNTkzMWQ0NGExNzI4NWQyMmVmY2VlYWNiMzMyMTdlYmNhNjVmODE4YVNN7eg=: 00:25:39.351 13:53:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:39.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:25:39.351 13:53:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:39.351 13:53:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.450 request: 00:26:11.450 { 00:26:11.450 "name": "nvme0", 00:26:11.450 "trtype": "rdma", 00:26:11.450 "traddr": "192.168.100.8", 00:26:11.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:26:11.450 "adrfam": "ipv4", 00:26:11.450 "trsvcid": "4420", 00:26:11.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:11.450 "dhchap_key": "key3", 00:26:11.450 "method": "bdev_nvme_attach_controller", 00:26:11.450 "req_id": 1 00:26:11.450 } 00:26:11.450 Got JSON-RPC error response 00:26:11.450 response: 00:26:11.450 { 00:26:11.450 "code": -5, 00:26:11.450 "message": "Input/output error" 00:26:11.450 } 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:11.450 13:54:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:26:43.556 request: 00:26:43.556 { 00:26:43.556 "name": "nvme0", 00:26:43.556 "trtype": "rdma", 00:26:43.556 "traddr": "192.168.100.8", 00:26:43.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:26:43.556 "adrfam": "ipv4", 00:26:43.556 "trsvcid": "4420", 00:26:43.556 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:43.556 "dhchap_key": "key3", 00:26:43.556 "method": "bdev_nvme_attach_controller", 00:26:43.556 "req_id": 1 00:26:43.556 } 00:26:43.556 Got JSON-RPC error response 00:26:43.556 response: 00:26:43.556 { 00:26:43.556 "code": -5, 00:26:43.556 "message": "Input/output error" 00:26:43.556 } 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:43.556 13:54:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:26:43.556 request: 00:26:43.556 { 00:26:43.556 "name": "nvme0", 00:26:43.556 "trtype": "rdma", 00:26:43.556 "traddr": "192.168.100.8", 00:26:43.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:26:43.556 "adrfam": "ipv4", 00:26:43.556 "trsvcid": "4420", 00:26:43.556 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:26:43.556 "dhchap_key": "key0", 00:26:43.556 "dhchap_ctrlr_key": "key1", 00:26:43.556 "method": "bdev_nvme_attach_controller", 00:26:43.556 "req_id": 1 00:26:43.556 } 00:26:43.556 Got JSON-RPC error response 00:26:43.556 response: 00:26:43.556 { 00:26:43.556 "code": -5, 
00:26:43.556 "message": "Input/output error" 00:26:43.556 } 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:43.556 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:26:43.557 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2150562 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2150562 ']' 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2150562 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2150562 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2150562' 00:26:43.557 killing process with pid 2150562 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2150562 00:26:43.557 13:54:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2150562 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@117 -- # sync 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:43.557 rmmod nvme_rdma 00:26:43.557 rmmod nvme_fabrics 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2191223 ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2191223 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2191223 ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2191223 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2191223 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2191223' 00:26:43.557 killing process with pid 2191223 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2191223 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2191223 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.FxQ /tmp/spdk.key-sha256.pFV /tmp/spdk.key-sha384.0Ux /tmp/spdk.key-sha512.QCr /tmp/spdk.key-sha512.DAF /tmp/spdk.key-sha384.4Mc /tmp/spdk.key-sha256.4bz '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:26:43.557 00:26:43.557 real 4m37.380s 00:26:43.557 user 9m49.975s 00:26:43.557 sys 0m17.457s 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:43.557 13:54:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 ************************************ 00:26:43.557 END TEST nvmf_auth_target 00:26:43.557 ************************************ 00:26:43.557 13:54:34 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:26:43.557 13:54:34 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:26:43.557 13:54:34 nvmf_rdma -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:26:43.557 13:54:34 nvmf_rdma -- common/autotest_common.sh@1100 -- 
# '[' 3 -le 1 ']' 00:26:43.557 13:54:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:43.557 13:54:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:43.557 ************************************ 00:26:43.557 START TEST nvmf_fuzz 00:26:43.557 ************************************ 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:26:43.557 * Looking for test storage... 00:26:43.557 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:43.557 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != 
virt ]] 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:26:43.558 13:54:34 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:48.852 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.852 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:26:48.852 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:48.852 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.853 13:54:41 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:26:48.853 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:26:48.853 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:26:48.853 Found net devices under 0000:98:00.0: mlx_0_0 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:26:48.853 Found net devices under 0000:98:00.1: mlx_0_1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:48.853 13:54:41 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:48.853 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.853 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:26:48.853 altname enp152s0f0np0 00:26:48.853 altname ens817f0np0 00:26:48.853 inet 192.168.100.8/24 scope global mlx_0_0 00:26:48.853 valid_lft forever preferred_lft forever 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:48.853 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:48.853 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:26:48.853 altname enp152s0f1np1 00:26:48.853 altname ens817f1np1 00:26:48.853 inet 192.168.100.9/24 scope global mlx_0_1 00:26:48.853 valid_lft forever preferred_lft forever 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:48.853 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:48.854 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:48.854 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:48.854 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:48.854 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:48.854 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:49.115 192.168.100.9' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:49.115 192.168.100.9' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:49.115 192.168.100.9' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2208187 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2208187 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2208187 ']' 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.115 13:54:41 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:49.115 13:54:41 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.059 Malloc0 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:26:50.059 13:54:42 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:27:22.163 Fuzzing completed. 
Shutting down the fuzz application 00:27:22.163 00:27:22.163 Dumping successful admin opcodes: 00:27:22.163 8, 9, 10, 24, 00:27:22.163 Dumping successful io opcodes: 00:27:22.163 0, 9, 00:27:22.163 NS: 0x200003af1f00 I/O qp, Total commands completed: 1324175, total successful commands: 7797, random_seed: 428023808 00:27:22.163 NS: 0x200003af1f00 admin qp, Total commands completed: 189278, total successful commands: 1519, random_seed: 658798336 00:27:22.163 13:55:13 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:22.163 Fuzzing completed. Shutting down the fuzz application 00:27:22.163 00:27:22.163 Dumping successful admin opcodes: 00:27:22.163 24, 00:27:22.163 Dumping successful io opcodes: 00:27:22.163 00:27:22.163 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1722486609 00:27:22.163 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1722563469 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:22.163 rmmod nvme_rdma 00:27:22.163 rmmod nvme_fabrics 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2208187 ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2208187 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2208187 ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 2208187 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2208187 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:22.163 13:55:14 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2208187' 00:27:22.163 killing process with pid 2208187 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 2208187 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 2208187 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:22.163 00:27:22.163 real 0m40.357s 00:27:22.163 user 0m53.615s 00:27:22.163 sys 0m17.534s 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:22.163 13:55:14 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:22.163 ************************************ 00:27:22.163 END TEST nvmf_fuzz 00:27:22.163 ************************************ 00:27:22.163 13:55:14 nvmf_rdma -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:27:22.163 13:55:14 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:22.163 13:55:14 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:22.163 13:55:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:22.163 ************************************ 00:27:22.163 START TEST nvmf_multiconnection 00:27:22.163 ************************************ 00:27:22.163 13:55:14 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:27:22.163 * Looking for test storage... 
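The multiconnection test starting here reuses the target bring-up pattern the fuzz stage above just exercised. Condensed from the xtrace, that fuzz stage amounts to the sequence below; this is a sketch rather than the verbatim script: rpc() stands in for the harness's rpc_cmd wrapper (assumed here to be equivalent to scripts/rpc.py against /var/tmp/spdk.sock) and $rootdir abbreviates the workspace path.

    # Sketch of the nvmf_fuzz flow recorded above; rpc() and $rootdir are assumptions.
    rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

    # Target on one core (-m 0x1), shm id 0, tracepoint group mask 0xFFFF.
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &

    # RDMA transport plus one malloc-backed subsystem on the first ConnectX port.
    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc bdev_malloc_create -b Malloc0 64 512
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
    # Pass 1: 30 s of seeded random admin and I/O commands (-S 123456).
    "$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    # Pass 2: replay the canned commands from example.json against the same subsystem.
    "$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -F "$trid" -j "$rootdir/test/app/fuzz/nvme_fuzz/example.json" -a

Both passes finished with the target still responsive; their opcode and command-count summaries are the dumps shown in the log above.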
00:27:22.163 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.163 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.424 13:55:15 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:27:29.069 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:29.069 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:29.069 Found net devices under 0000:98:00.0: mlx_0_0 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:29.069 Found net devices under 0000:98:00.1: mlx_0_1 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.069 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:29.070 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 
-- # for net_dev in "${net_devs[@]}" 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:29.335 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:27:29.336 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:29.336 13:55:21 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:29.336 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:29.336 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:27:29.336 altname enp152s0f0np0 00:27:29.336 altname ens817f0np0 00:27:29.336 inet 192.168.100.8/24 scope global mlx_0_0 00:27:29.336 valid_lft forever preferred_lft forever 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:29.336 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:29.336 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:27:29.336 altname enp152s0f1np1 00:27:29.336 altname 
ens817f1np1 00:27:29.336 inet 192.168.100.9/24 scope global mlx_0_1 00:27:29.336 valid_lft forever preferred_lft forever 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:29.336 192.168.100.9' 00:27:29.336 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:29.337 192.168.100.9' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:29.337 192.168.100.9' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2218263 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2218263 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 2218263 ']' 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:29.337 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.337 [2024-06-11 13:55:22.199386] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
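The address discovery traced above is the same allocate_nic_ips pass the fuzz stage ran: for each RDMA-capable port the harness scrapes the IPv4 address out of `ip -o -4 addr show`, and the first and second entries of the resulting list become NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A minimal sketch of that extraction, with the two ConnectX netdev names hard-coded for illustration (the real harness enumerates them via get_rdma_if_list):

    # Sketch only: interface names are hard-coded here for clarity.
    get_ip_address() {
        local interface=$1
        # "10: mlx_0_0    inet 192.168.100.8/24 ..." -> field 4, then strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

With the addresses in hand, nvmfappstart launches a fresh target for this test (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 2218263 here) and waitforlisten blocks until it is accepting RPCs on /var/tmp/spdk.sock, which accounts for the SPDK/DPDK start-up chatter around this point.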
00:27:29.337 [2024-06-11 13:55:22.199442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.337 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.599 [2024-06-11 13:55:22.261074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.599 [2024-06-11 13:55:22.327571] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.599 [2024-06-11 13:55:22.327611] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.599 [2024-06-11 13:55:22.327618] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.599 [2024-06-11 13:55:22.327625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.599 [2024-06-11 13:55:22.327631] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.599 [2024-06-11 13:55:22.327766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.599 [2024-06-11 13:55:22.327879] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.599 [2024-06-11 13:55:22.328046] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.599 [2024-06-11 13:55:22.328047] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.174 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:30.174 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:27:30.174 13:55:22 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.174 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:30.174 13:55:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.175 13:55:23 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.175 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:30.175 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.176 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.176 [2024-06-11 13:55:23.055367] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd73e90/0xd78380) succeed. 00:27:30.176 [2024-06-11 13:55:23.069886] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd754d0/0xdb9a10) succeed. 
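The nvmf_create_transport call just above is what triggers those two "Create IB device ... succeed." notices, confirming the new target sees both mlx5 ports. From here multiconnection.sh loops over eleven subsystems (NVMF_SUBSYS=11), each backed by its own 64 MB / 512-byte-block malloc bdev and exported on the first listener address. The trace that follows is that loop unrolled; condensed, and again assuming scripts/rpc.py in place of the rpc_cmd wrapper, it is roughly:

    rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }  # assumed wrapper

    # Transport creation (already issued above; shown for completeness).
    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    for i in $(seq 1 11); do
        rpc bdev_malloc_create 64 512 -b "Malloc$i"        # 64 MB, 512 B blocks
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
    done

Each add_listener call is what produces the "*** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***" notice that shows up for cnode1 just below.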
00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.440 Malloc1 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.440 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 [2024-06-11 13:55:23.253070] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 Malloc2 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 
Malloc2 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 Malloc3 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.441 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 Malloc4 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s 
SPDK4 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 Malloc5 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:30.701 13:55:23 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 Malloc6 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.701 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 Malloc7 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 Malloc8 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.702 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.963 Malloc9 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.963 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 Malloc10 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 Malloc11 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.964 13:55:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:32.347 13:55:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:32.347 13:55:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:32.347 13:55:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:32.347 13:55:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:32.347 13:55:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.890 13:55:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:27:35.831 13:55:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:35.831 13:55:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:35.831 13:55:28 
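
Note: the target-side loop traced above (repeated for cnode1 through cnode11) reduces to four RPCs per subsystem. A minimal stand-alone sketch, assuming a running nvmf_tgt that already has an RDMA transport, scripts/rpc.py from the SPDK checkout, and the listener address 192.168.100.8:4420 used in this run:

RPC=scripts/rpc.py   # assumed path inside the SPDK checkout; the test uses its rpc_cmd wrapper instead
for i in $(seq 1 11); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"                              # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a: allow any host, -s: serial number
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # expose the bdev as a namespace
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done

The serial numbers SPDK1..SPDK11 are what the host-side wait loop below greps for once the namespaces show up as block devices.
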
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:35.831 13:55:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:35.831 13:55:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.372 13:55:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:27:39.315 13:55:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:39.315 13:55:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:39.315 13:55:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:39.315 13:55:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:39.315 13:55:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:41.226 13:55:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:27:43.151 13:55:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:43.151 13:55:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:43.151 13:55:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:43.151 13:55:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:43.151 13:55:35 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1204 -- # sleep 2 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:45.063 13:55:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:27:46.447 13:55:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:46.447 13:55:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:46.447 13:55:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:46.447 13:55:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:46.447 13:55:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:48.358 13:55:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:48.358 13:55:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:48.358 13:55:40 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:27:48.358 13:55:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:48.358 13:55:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:48.358 13:55:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:48.358 13:55:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:48.358 13:55:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:27:49.747 13:55:42 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:49.747 13:55:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:49.747 13:55:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:49.747 13:55:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:49.747 13:55:42 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 
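
Note: the host-side pattern traced here is the same for every subsystem: one nvme connect per cnode, then the test's waitforserial helper, which polls lsblk until a block device with the expected serial (SPDK1..SPDK11) appears. A rough stand-alone equivalent, with the connect flags copied from the run above and the polling loop only approximating waitforserial, not reproducing it exactly:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=${HOSTNQN##*uuid:}    # bare UUID, as passed to --hostid above
for i in $(seq 1 11); do
    nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    tries=0
    until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do    # wait for the namespace to surface as a block device
        (( tries++ >= 15 )) && { echo "serial SPDK$i never appeared" >&2; break; }
        sleep 2
    done
done
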
00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:51.663 13:55:44 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:27:53.047 13:55:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:53.047 13:55:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:53.047 13:55:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:53.047 13:55:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:53.047 13:55:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:55.600 13:55:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:27:56.629 13:55:49 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:56.629 13:55:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:56.629 13:55:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:56.629 13:55:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:56.629 13:55:49 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:58.538 13:55:51 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:28:00.449 13:55:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:28:00.449 13:55:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:00.449 13:55:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:00.449 13:55:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:00.449 13:55:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:02.357 13:55:54 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:28:03.296 13:55:56 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:28:03.296 13:55:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:03.296 13:55:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:03.296 13:55:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:03.296 13:55:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:28:05.834 13:55:58 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:28:06.773 13:55:59 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:28:06.773 13:55:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:28:06.773 13:55:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:06.773 13:55:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:06.773 13:55:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:28:09.311 13:56:01 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:28:09.311 [global] 00:28:09.311 thread=1 00:28:09.311 invalidate=1 00:28:09.311 rw=read 00:28:09.311 time_based=1 00:28:09.311 runtime=10 00:28:09.311 ioengine=libaio 00:28:09.311 direct=1 00:28:09.311 bs=262144 00:28:09.311 iodepth=64 00:28:09.311 norandommap=1 00:28:09.311 numjobs=1 00:28:09.311 00:28:09.311 [job0] 00:28:09.311 filename=/dev/nvme0n1 00:28:09.311 [job1] 00:28:09.311 filename=/dev/nvme10n1 00:28:09.311 [job2] 00:28:09.311 filename=/dev/nvme1n1 00:28:09.311 [job3] 00:28:09.311 filename=/dev/nvme2n1 00:28:09.311 [job4] 00:28:09.311 filename=/dev/nvme3n1 00:28:09.311 [job5] 00:28:09.311 filename=/dev/nvme4n1 00:28:09.311 [job6] 00:28:09.311 filename=/dev/nvme5n1 00:28:09.311 [job7] 00:28:09.311 filename=/dev/nvme6n1 00:28:09.311 [job8] 00:28:09.311 filename=/dev/nvme7n1 00:28:09.311 [job9] 00:28:09.311 filename=/dev/nvme8n1 00:28:09.311 [job10] 00:28:09.311 filename=/dev/nvme9n1 00:28:09.311 Could not set queue depth (nvme0n1) 00:28:09.311 Could not set queue depth (nvme10n1) 00:28:09.311 Could not set queue depth (nvme1n1) 00:28:09.311 Could not set queue depth (nvme2n1) 00:28:09.311 Could not set queue depth (nvme3n1) 00:28:09.311 Could not set queue depth (nvme4n1) 00:28:09.311 Could not set queue depth (nvme5n1) 00:28:09.311 Could not set queue depth (nvme6n1) 00:28:09.311 Could not set queue depth (nvme7n1) 00:28:09.311 Could not set queue depth (nvme8n1) 00:28:09.311 Could not set queue depth (nvme9n1) 00:28:09.311 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
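
Note: the job file echoed above is what scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 generates: one libaio job per connected namespace, 256 KiB sequential reads at queue depth 64 for 10 seconds. A hand-rolled sketch of the same thing (the /tmp path is an assumption, the device names are the ones this run happened to get, and only the first job stanza is spelled out):

cat > /tmp/multiconn-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
EOF
# ...add one [jobN] stanza per remaining device (/dev/nvme1n1 ... /dev/nvme10n1 in this run)
fio /tmp/multiconn-read.fio
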
iodepth=64 00:28:09.311 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:09.311 fio-3.35 00:28:09.311 Starting 11 threads 00:28:21.540 00:28:21.540 job0: (groupid=0, jobs=1): err= 0: pid=2226316: Tue Jun 11 13:56:12 2024 00:28:21.540 read: IOPS=2371, BW=593MiB/s (622MB/s)(5966MiB/10061msec) 00:28:21.540 slat (usec): min=6, max=39466, avg=415.05, stdev=2181.66 00:28:21.540 clat (usec): min=1009, max=129132, avg=26532.95, stdev=21356.47 00:28:21.540 lat (usec): min=1041, max=129174, avg=26948.00, stdev=21771.68 00:28:21.540 clat percentiles (msec): 00:28:21.540 | 1.00th=[ 12], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:28:21.540 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 17], 00:28:21.540 | 70.00th=[ 26], 80.00th=[ 48], 90.00th=[ 68], 95.00th=[ 70], 00:28:21.540 | 99.00th=[ 75], 99.50th=[ 83], 99.90th=[ 111], 99.95th=[ 125], 00:28:21.540 | 99.99th=[ 130] 00:28:21.540 bw ( KiB/s): min=223232, max=1292288, per=14.38%, avg=609393.60, stdev=445518.01, samples=20 00:28:21.540 iops : min= 872, max= 5048, avg=2380.40, stdev=1740.25, samples=20 00:28:21.540 lat (msec) : 2=0.05%, 4=0.11%, 10=0.42%, 20=60.14%, 50=20.08% 00:28:21.540 lat (msec) : 100=18.84%, 250=0.36% 00:28:21.540 cpu : usr=0.37%, sys=4.32%, ctx=5226, majf=0, minf=4097 00:28:21.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:28:21.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.540 issued rwts: total=23863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.540 job1: (groupid=0, jobs=1): err= 0: pid=2226317: Tue Jun 11 13:56:12 2024 00:28:21.540 read: IOPS=1059, BW=265MiB/s (278MB/s)(2665MiB/10059msec) 00:28:21.540 slat (usec): min=6, max=30066, avg=936.33, stdev=2851.74 00:28:21.540 clat (msec): min=7, max=108, avg=59.37, stdev= 8.49 00:28:21.540 lat (msec): min=8, max=108, avg=60.31, stdev= 8.97 00:28:21.540 clat percentiles (msec): 00:28:21.540 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52], 00:28:21.540 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:28:21.540 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 71], 00:28:21.540 | 99.00th=[ 86], 99.50th=[ 88], 99.90th=[ 94], 99.95th=[ 104], 00:28:21.540 | 99.99th=[ 104] 00:28:21.540 bw ( KiB/s): min=214528, max=319488, per=6.40%, avg=271206.40, stdev=32460.07, samples=20 00:28:21.540 iops : min= 838, max= 1248, avg=1059.40, stdev=126.80, samples=20 00:28:21.540 lat (msec) : 10=0.06%, 20=0.25%, 
50=12.21%, 100=87.43%, 250=0.06% 00:28:21.540 cpu : usr=0.47%, sys=3.66%, ctx=2198, majf=0, minf=3535 00:28:21.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:21.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.540 issued rwts: total=10658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.540 job2: (groupid=0, jobs=1): err= 0: pid=2226320: Tue Jun 11 13:56:12 2024 00:28:21.540 read: IOPS=1136, BW=284MiB/s (298MB/s)(2856MiB/10055msec) 00:28:21.540 slat (usec): min=6, max=39559, avg=859.20, stdev=2849.92 00:28:21.540 clat (msec): min=7, max=123, avg=55.40, stdev=12.71 00:28:21.540 lat (msec): min=7, max=123, avg=56.26, stdev=13.15 00:28:21.540 clat percentiles (msec): 00:28:21.540 | 1.00th=[ 25], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 46], 00:28:21.540 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 58], 00:28:21.540 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 70], 95.00th=[ 72], 00:28:21.540 | 99.00th=[ 79], 99.50th=[ 101], 99.90th=[ 118], 99.95th=[ 123], 00:28:21.540 | 99.99th=[ 124] 00:28:21.540 bw ( KiB/s): min=220672, max=445306, per=6.86%, avg=290730.90, stdev=63355.01, samples=20 00:28:21.540 iops : min= 862, max= 1739, avg=1135.60, stdev=247.47, samples=20 00:28:21.541 lat (msec) : 10=0.22%, 20=0.37%, 50=29.67%, 100=69.24%, 250=0.50% 00:28:21.541 cpu : usr=0.31%, sys=3.63%, ctx=2500, majf=0, minf=4097 00:28:21.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:21.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.541 issued rwts: total=11424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.541 job3: (groupid=0, jobs=1): err= 0: pid=2226321: Tue Jun 11 13:56:12 2024 00:28:21.541 read: IOPS=1139, BW=285MiB/s (299MB/s)(2864MiB/10054msec) 00:28:21.541 slat (usec): min=6, max=38146, avg=871.12, stdev=3059.47 00:28:21.541 clat (msec): min=8, max=129, avg=55.23, stdev=12.48 00:28:21.541 lat (msec): min=8, max=129, avg=56.10, stdev=12.97 00:28:21.541 clat percentiles (msec): 00:28:21.541 | 1.00th=[ 25], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 46], 00:28:21.541 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 54], 60.00th=[ 58], 00:28:21.541 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 70], 95.00th=[ 72], 00:28:21.541 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 103], 99.95th=[ 107], 00:28:21.541 | 99.99th=[ 130] 00:28:21.541 bw ( KiB/s): min=221528, max=440320, per=6.88%, avg=291575.60, stdev=63663.42, samples=20 00:28:21.541 iops : min= 865, max= 1720, avg=1138.95, stdev=248.71, samples=20 00:28:21.541 lat (msec) : 10=0.19%, 20=0.50%, 50=30.06%, 100=69.04%, 250=0.21% 00:28:21.541 cpu : usr=0.28%, sys=3.47%, ctx=2481, majf=0, minf=4097 00:28:21.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:21.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.541 issued rwts: total=11457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.541 job4: (groupid=0, jobs=1): err= 0: pid=2226322: Tue Jun 11 13:56:12 2024 00:28:21.541 read: IOPS=1058, BW=265MiB/s 
(277MB/s)(2660MiB/10054msec) 00:28:21.541 slat (usec): min=7, max=20982, avg=936.57, stdev=2507.21 00:28:21.541 clat (msec): min=9, max=113, avg=59.47, stdev= 8.33 00:28:21.541 lat (msec): min=10, max=115, avg=60.41, stdev= 8.72 00:28:21.541 clat percentiles (msec): 00:28:21.541 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:28:21.541 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:28:21.541 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 69], 95.00th=[ 71], 00:28:21.541 | 99.00th=[ 85], 99.50th=[ 86], 99.90th=[ 103], 99.95th=[ 106], 00:28:21.541 | 99.99th=[ 114] 00:28:21.541 bw ( KiB/s): min=222208, max=316928, per=6.38%, avg=270569.90, stdev=31415.11, samples=20 00:28:21.541 iops : min= 868, max= 1238, avg=1056.90, stdev=122.72, samples=20 00:28:21.541 lat (msec) : 10=0.01%, 20=0.28%, 50=11.33%, 100=88.25%, 250=0.13% 00:28:21.541 cpu : usr=0.32%, sys=3.85%, ctx=2181, majf=0, minf=4097 00:28:21.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:21.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.541 issued rwts: total=10639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.541 job5: (groupid=0, jobs=1): err= 0: pid=2226323: Tue Jun 11 13:56:12 2024 00:28:21.541 read: IOPS=1808, BW=452MiB/s (474MB/s)(4544MiB/10050msec) 00:28:21.541 slat (usec): min=6, max=48372, avg=543.83, stdev=2412.81 00:28:21.541 clat (msec): min=10, max=117, avg=34.81, stdev=19.16 00:28:21.541 lat (msec): min=10, max=119, avg=35.36, stdev=19.57 00:28:21.541 clat percentiles (msec): 00:28:21.541 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 22], 00:28:21.541 | 30.00th=[ 23], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 36], 00:28:21.541 | 70.00th=[ 40], 80.00th=[ 46], 90.00th=[ 69], 95.00th=[ 70], 00:28:21.541 | 99.00th=[ 75], 99.50th=[ 81], 99.90th=[ 110], 99.95th=[ 114], 00:28:21.541 | 99.99th=[ 116] 00:28:21.541 bw ( KiB/s): min=222342, max=1033728, per=10.94%, avg=463597.10, stdev=249558.03, samples=20 00:28:21.541 iops : min= 868, max= 4038, avg=1810.90, stdev=974.86, samples=20 00:28:21.541 lat (msec) : 20=14.92%, 50=65.33%, 100=19.58%, 250=0.17% 00:28:21.541 cpu : usr=0.32%, sys=4.28%, ctx=3911, majf=0, minf=4097 00:28:21.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:21.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.541 issued rwts: total=18175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.541 job6: (groupid=0, jobs=1): err= 0: pid=2226325: Tue Jun 11 13:56:12 2024 00:28:21.541 read: IOPS=1037, BW=259MiB/s (272MB/s)(2606MiB/10050msec) 00:28:21.541 slat (usec): min=6, max=20233, avg=947.62, stdev=2440.31 00:28:21.541 clat (msec): min=10, max=111, avg=60.70, stdev= 7.94 00:28:21.541 lat (msec): min=10, max=111, avg=61.65, stdev= 8.33 00:28:21.541 clat percentiles (msec): 00:28:21.541 | 1.00th=[ 43], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:28:21.541 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 62], 60.00th=[ 63], 00:28:21.541 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 71], 00:28:21.541 | 99.00th=[ 85], 99.50th=[ 88], 99.90th=[ 106], 99.95th=[ 108], 00:28:21.541 | 99.99th=[ 112] 00:28:21.541 bw ( KiB/s): min=221696, 
max=315392, per=6.25%, avg=265089.60, stdev=28326.05, samples=20 00:28:21.541 iops : min= 866, max= 1232, avg=1035.45, stdev=110.71, samples=20 00:28:21.541 lat (msec) : 20=0.18%, 50=6.59%, 100=93.09%, 250=0.13% 00:28:21.541 cpu : usr=0.38%, sys=3.38%, ctx=2231, majf=0, minf=4097 00:28:21.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:21.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.541 issued rwts: total=10422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.541 job7: (groupid=0, jobs=1): err= 0: pid=2226326: Tue Jun 11 13:56:12 2024 00:28:21.541 read: IOPS=1526, BW=382MiB/s (400MB/s)(3837MiB/10056msec) 00:28:21.541 slat (usec): min=6, max=37265, avg=647.87, stdev=2236.69 00:28:21.541 clat (msec): min=8, max=116, avg=41.23, stdev=16.36 00:28:21.541 lat (msec): min=8, max=121, avg=41.88, stdev=16.72 00:28:21.541 clat percentiles (msec): 00:28:21.541 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 27], 00:28:21.541 | 30.00th=[ 28], 40.00th=[ 34], 50.00th=[ 38], 60.00th=[ 40], 00:28:21.541 | 70.00th=[ 45], 80.00th=[ 62], 90.00th=[ 64], 95.00th=[ 66], 00:28:21.541 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 110], 99.95th=[ 114], 00:28:21.541 | 99.99th=[ 116] 00:28:21.541 bw ( KiB/s): min=220672, max=620544, per=9.22%, avg=390998.30, stdev=148342.92, samples=20 00:28:21.541 iops : min= 862, max= 2424, avg=1527.30, stdev=579.50, samples=20 00:28:21.542 lat (msec) : 10=0.08%, 20=0.28%, 50=70.31%, 100=29.15%, 250=0.17% 00:28:21.542 cpu : usr=0.40%, sys=4.12%, ctx=3245, majf=0, minf=4097 00:28:21.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:21.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.542 issued rwts: total=15346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.542 job8: (groupid=0, jobs=1): err= 0: pid=2226327: Tue Jun 11 13:56:12 2024 00:28:21.542 read: IOPS=3274, BW=819MiB/s (859MB/s)(8196MiB/10010msec) 00:28:21.542 slat (usec): min=6, max=11996, avg=303.38, stdev=834.23 00:28:21.542 clat (usec): min=8454, max=50247, avg=19213.95, stdev=10100.80 00:28:21.542 lat (usec): min=8659, max=54010, avg=19517.32, stdev=10275.03 00:28:21.542 clat percentiles (usec): 00:28:21.542 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10683], 20.00th=[10945], 00:28:21.542 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12125], 60.00th=[21627], 00:28:21.542 | 70.00th=[25035], 80.00th=[26870], 90.00th=[36963], 95.00th=[39584], 00:28:21.542 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46400], 99.95th=[48497], 00:28:21.542 | 99.99th=[49021] 00:28:21.542 bw ( KiB/s): min=399134, max=1461760, per=19.03%, avg=806765.37, stdev=418367.89, samples=19 00:28:21.542 iops : min= 1559, max= 5710, avg=3151.42, stdev=1634.26, samples=19 00:28:21.542 lat (msec) : 10=3.11%, 20=52.65%, 50=44.24%, 100=0.01% 00:28:21.542 cpu : usr=0.51%, sys=5.93%, ctx=6607, majf=0, minf=4097 00:28:21.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:21.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.542 issued rwts: total=32782,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:28:21.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.542 job9: (groupid=0, jobs=1): err= 0: pid=2226328: Tue Jun 11 13:56:12 2024 00:28:21.542 read: IOPS=1140, BW=285MiB/s (299MB/s)(2865MiB/10049msec) 00:28:21.542 slat (usec): min=6, max=21153, avg=869.65, stdev=2397.22 00:28:21.542 clat (msec): min=8, max=118, avg=55.20, stdev=12.22 00:28:21.542 lat (msec): min=8, max=125, avg=56.07, stdev=12.58 00:28:21.542 clat percentiles (msec): 00:28:21.542 | 1.00th=[ 26], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 46], 00:28:21.542 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 58], 00:28:21.542 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 70], 95.00th=[ 72], 00:28:21.542 | 99.00th=[ 79], 99.50th=[ 84], 99.90th=[ 103], 99.95th=[ 108], 00:28:21.542 | 99.99th=[ 120] 00:28:21.542 bw ( KiB/s): min=223744, max=441856, per=6.88%, avg=291788.80, stdev=63418.18, samples=20 00:28:21.542 iops : min= 874, max= 1726, avg=1139.80, stdev=247.73, samples=20 00:28:21.542 lat (msec) : 10=0.11%, 20=0.44%, 50=30.46%, 100=68.85%, 250=0.14% 00:28:21.542 cpu : usr=0.49%, sys=3.37%, ctx=2465, majf=0, minf=4097 00:28:21.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:21.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.542 issued rwts: total=11461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.542 job10: (groupid=0, jobs=1): err= 0: pid=2226329: Tue Jun 11 13:56:12 2024 00:28:21.542 read: IOPS=1029, BW=257MiB/s (270MB/s)(2589MiB/10061msec) 00:28:21.542 slat (usec): min=7, max=28037, avg=948.38, stdev=2845.21 00:28:21.542 clat (msec): min=8, max=121, avg=61.14, stdev= 9.83 00:28:21.542 lat (msec): min=9, max=121, avg=62.09, stdev=10.27 00:28:21.542 clat percentiles (msec): 00:28:21.542 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:28:21.542 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 65], 00:28:21.542 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 71], 95.00th=[ 74], 00:28:21.542 | 99.00th=[ 88], 99.50th=[ 92], 99.90th=[ 115], 99.95th=[ 118], 00:28:21.542 | 99.99th=[ 122] 00:28:21.542 bw ( KiB/s): min=224768, max=316928, per=6.22%, avg=263500.80, stdev=34109.09, samples=20 00:28:21.542 iops : min= 878, max= 1238, avg=1029.30, stdev=133.24, samples=20 00:28:21.542 lat (msec) : 10=0.05%, 20=0.27%, 50=7.84%, 100=91.60%, 250=0.24% 00:28:21.542 cpu : usr=0.42%, sys=3.40%, ctx=2233, majf=0, minf=4097 00:28:21.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:21.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:21.542 issued rwts: total=10356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:21.542 00:28:21.542 Run status group 0 (all jobs): 00:28:21.542 READ: bw=4139MiB/s (4340MB/s), 257MiB/s-819MiB/s (270MB/s-859MB/s), io=40.7GiB (43.7GB), run=10010-10061msec 00:28:21.542 00:28:21.542 Disk stats (read/write): 00:28:21.542 nvme0n1: ios=47514/0, merge=0/0, ticks=1221070/0, in_queue=1221070, util=97.26% 00:28:21.542 nvme10n1: ios=21062/0, merge=0/0, ticks=1228428/0, in_queue=1228428, util=97.46% 00:28:21.542 nvme1n1: ios=22649/0, merge=0/0, ticks=1227482/0, in_queue=1227482, util=97.67% 00:28:21.542 nvme2n1: 
ios=22667/0, merge=0/0, ticks=1225543/0, in_queue=1225543, util=97.79% 00:28:21.542 nvme3n1: ios=21036/0, merge=0/0, ticks=1227882/0, in_queue=1227882, util=97.89% 00:28:21.542 nvme4n1: ios=36097/0, merge=0/0, ticks=1223757/0, in_queue=1223757, util=98.14% 00:28:21.542 nvme5n1: ios=20596/0, merge=0/0, ticks=1228319/0, in_queue=1228319, util=98.30% 00:28:21.542 nvme6n1: ios=30459/0, merge=0/0, ticks=1223993/0, in_queue=1223993, util=98.49% 00:28:21.542 nvme7n1: ios=64808/0, merge=0/0, ticks=1224788/0, in_queue=1224788, util=98.87% 00:28:21.542 nvme8n1: ios=22680/0, merge=0/0, ticks=1227265/0, in_queue=1227265, util=98.97% 00:28:21.542 nvme9n1: ios=20501/0, merge=0/0, ticks=1228600/0, in_queue=1228600, util=99.21% 00:28:21.542 13:56:12 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:21.542 [global] 00:28:21.542 thread=1 00:28:21.542 invalidate=1 00:28:21.542 rw=randwrite 00:28:21.542 time_based=1 00:28:21.542 runtime=10 00:28:21.542 ioengine=libaio 00:28:21.542 direct=1 00:28:21.542 bs=262144 00:28:21.542 iodepth=64 00:28:21.542 norandommap=1 00:28:21.542 numjobs=1 00:28:21.542 00:28:21.542 [job0] 00:28:21.542 filename=/dev/nvme0n1 00:28:21.542 [job1] 00:28:21.542 filename=/dev/nvme10n1 00:28:21.542 [job2] 00:28:21.542 filename=/dev/nvme1n1 00:28:21.542 [job3] 00:28:21.542 filename=/dev/nvme2n1 00:28:21.542 [job4] 00:28:21.542 filename=/dev/nvme3n1 00:28:21.542 [job5] 00:28:21.542 filename=/dev/nvme4n1 00:28:21.542 [job6] 00:28:21.542 filename=/dev/nvme5n1 00:28:21.543 [job7] 00:28:21.543 filename=/dev/nvme6n1 00:28:21.543 [job8] 00:28:21.543 filename=/dev/nvme7n1 00:28:21.543 [job9] 00:28:21.543 filename=/dev/nvme8n1 00:28:21.543 [job10] 00:28:21.543 filename=/dev/nvme9n1 00:28:21.543 Could not set queue depth (nvme0n1) 00:28:21.543 Could not set queue depth (nvme10n1) 00:28:21.543 Could not set queue depth (nvme1n1) 00:28:21.543 Could not set queue depth (nvme2n1) 00:28:21.543 Could not set queue depth (nvme3n1) 00:28:21.543 Could not set queue depth (nvme4n1) 00:28:21.543 Could not set queue depth (nvme5n1) 00:28:21.543 Could not set queue depth (nvme6n1) 00:28:21.543 Could not set queue depth (nvme7n1) 00:28:21.543 Could not set queue depth (nvme8n1) 00:28:21.543 Could not set queue depth (nvme9n1) 00:28:21.543 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job9: (g=0): 
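
Note: the second fio-wrapper pass above is identical to the read pass except for -t randwrite; the generated job file only changes rw=read to rw=randwrite, while block size, iodepth, runtime, and the per-namespace job list stay the same. Reusing the hypothetical job file from the earlier sketch:

sed 's/^rw=read$/rw=randwrite/' /tmp/multiconn-read.fio > /tmp/multiconn-randwrite.fio
fio /tmp/multiconn-randwrite.fio
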
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:21.543 fio-3.35 00:28:21.543 Starting 11 threads 00:28:31.538 00:28:31.538 job0: (groupid=0, jobs=1): err= 0: pid=2228417: Tue Jun 11 13:56:23 2024 00:28:31.538 write: IOPS=1397, BW=349MiB/s (366MB/s)(3510MiB/10045msec); 0 zone resets 00:28:31.538 slat (usec): min=16, max=8723, avg=708.59, stdev=1257.61 00:28:31.538 clat (msec): min=2, max=100, avg=45.07, stdev=10.87 00:28:31.538 lat (msec): min=2, max=100, avg=45.78, stdev=11.00 00:28:31.538 clat percentiles (msec): 00:28:31.538 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 32], 00:28:31.538 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:28:31.538 | 70.00th=[ 48], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 62], 00:28:31.538 | 99.00th=[ 64], 99.50th=[ 64], 99.90th=[ 86], 99.95th=[ 93], 00:28:31.538 | 99.99th=[ 101] 00:28:31.538 bw ( KiB/s): min=268288, max=558592, per=8.56%, avg=357796.00, stdev=89617.69, samples=20 00:28:31.538 iops : min= 1048, max= 2182, avg=1397.60, stdev=350.07, samples=20 00:28:31.538 lat (msec) : 4=0.01%, 10=0.06%, 20=0.11%, 50=71.88%, 100=27.92% 00:28:31.538 lat (msec) : 250=0.02% 00:28:31.538 cpu : usr=3.09%, sys=4.50%, ctx=3414, majf=0, minf=1 00:28:31.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:31.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.538 issued rwts: total=0,14038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.538 job1: (groupid=0, jobs=1): err= 0: pid=2228430: Tue Jun 11 13:56:23 2024 00:28:31.538 write: IOPS=1020, BW=255MiB/s (268MB/s)(2565MiB/10052msec); 0 zone resets 00:28:31.538 slat (usec): min=20, max=23068, avg=970.35, stdev=2069.87 00:28:31.538 clat (msec): min=2, max=121, avg=61.71, stdev=13.03 00:28:31.538 lat (msec): min=2, max=121, avg=62.68, stdev=13.28 00:28:31.538 clat percentiles (msec): 00:28:31.538 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:28:31.538 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 70], 00:28:31.538 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 78], 00:28:31.538 | 99.00th=[ 83], 99.50th=[ 91], 99.90th=[ 114], 99.95th=[ 121], 00:28:31.538 | 99.99th=[ 122] 00:28:31.538 bw ( KiB/s): min=217088, max=367616, per=6.25%, avg=261065.95, stdev=54884.55, samples=20 00:28:31.538 iops : min= 848, max= 1436, avg=1019.75, stdev=214.42, samples=20 00:28:31.538 lat (msec) : 4=0.01%, 10=0.09%, 20=0.12%, 50=28.24%, 100=71.34% 00:28:31.538 lat (msec) : 250=0.21% 00:28:31.538 cpu : usr=2.48%, sys=3.41%, ctx=2545, majf=0, minf=1 00:28:31.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:31.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.538 issued rwts: total=0,10260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.538 job2: (groupid=0, jobs=1): err= 0: pid=2228431: Tue Jun 11 13:56:23 2024 00:28:31.538 write: IOPS=1078, BW=270MiB/s (283MB/s)(2710MiB/10051msec); 0 zone resets 00:28:31.538 slat (usec): min=16, max=29008, avg=901.01, stdev=2044.67 
00:28:31.538 clat (msec): min=11, max=124, avg=58.43, stdev=15.81 00:28:31.538 lat (msec): min=11, max=124, avg=59.33, stdev=16.11 00:28:31.538 clat percentiles (msec): 00:28:31.538 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 40], 00:28:31.538 | 30.00th=[ 42], 40.00th=[ 56], 50.00th=[ 65], 60.00th=[ 70], 00:28:31.538 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 74], 95.00th=[ 77], 00:28:31.538 | 99.00th=[ 82], 99.50th=[ 92], 99.90th=[ 114], 99.95th=[ 116], 00:28:31.538 | 99.99th=[ 125] 00:28:31.538 bw ( KiB/s): min=216064, max=422400, per=6.60%, avg=275865.60, stdev=76658.95, samples=20 00:28:31.538 iops : min= 844, max= 1650, avg=1077.60, stdev=299.45, samples=20 00:28:31.538 lat (msec) : 20=0.46%, 50=32.32%, 100=66.99%, 250=0.23% 00:28:31.538 cpu : usr=2.13%, sys=3.52%, ctx=2687, majf=0, minf=1 00:28:31.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:31.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.538 issued rwts: total=0,10839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.538 job3: (groupid=0, jobs=1): err= 0: pid=2228432: Tue Jun 11 13:56:23 2024 00:28:31.538 write: IOPS=1255, BW=314MiB/s (329MB/s)(3152MiB/10044msec); 0 zone resets 00:28:31.538 slat (usec): min=18, max=11159, avg=781.68, stdev=1396.68 00:28:31.538 clat (msec): min=13, max=100, avg=50.19, stdev= 7.82 00:28:31.538 lat (msec): min=13, max=100, avg=50.97, stdev= 7.90 00:28:31.538 clat percentiles (msec): 00:28:31.538 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:28:31.538 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 47], 60.00th=[ 55], 00:28:31.538 | 70.00th=[ 57], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 62], 00:28:31.538 | 99.00th=[ 64], 99.50th=[ 65], 99.90th=[ 89], 99.95th=[ 93], 00:28:31.538 | 99.99th=[ 101] 00:28:31.538 bw ( KiB/s): min=266752, max=373248, per=7.68%, avg=321126.40, stdev=44764.33, samples=20 00:28:31.538 iops : min= 1042, max= 1458, avg=1254.40, stdev=174.86, samples=20 00:28:31.538 lat (msec) : 20=0.10%, 50=56.82%, 100=43.06%, 250=0.02% 00:28:31.538 cpu : usr=2.71%, sys=4.42%, ctx=3169, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,12607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job4: (groupid=0, jobs=1): err= 0: pid=2228433: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=2839, BW=710MiB/s (744MB/s)(7105MiB/10009msec); 0 zone resets 00:28:31.539 slat (usec): min=10, max=10248, avg=348.46, stdev=711.45 00:28:31.539 clat (usec): min=8068, max=49885, avg=22185.48, stdev=10510.53 00:28:31.539 lat (usec): min=9269, max=50529, avg=22533.94, stdev=10666.07 00:28:31.539 clat percentiles (usec): 00:28:31.539 | 1.00th=[12518], 5.00th=[13042], 10.00th=[13173], 20.00th=[13566], 00:28:31.539 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[27657], 00:28:31.539 | 70.00th=[29230], 80.00th=[30016], 90.00th=[41157], 95.00th=[44303], 00:28:31.539 | 99.00th=[46400], 99.50th=[46924], 99.90th=[48497], 99.95th=[48497], 00:28:31.539 | 99.99th=[49546] 00:28:31.539 bw ( KiB/s): min=358912, max=1182208, per=16.82%, avg=703005.95, 
stdev=333429.48, samples=19 00:28:31.539 iops : min= 1402, max= 4618, avg=2746.21, stdev=1302.35, samples=19 00:28:31.539 lat (msec) : 10=0.02%, 20=56.17%, 50=43.81% 00:28:31.539 cpu : usr=3.89%, sys=4.63%, ctx=6433, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,28420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job5: (groupid=0, jobs=1): err= 0: pid=2228434: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=1040, BW=260MiB/s (273MB/s)(2614MiB/10051msec); 0 zone resets 00:28:31.539 slat (usec): min=17, max=24277, avg=910.34, stdev=1969.44 00:28:31.539 clat (msec): min=14, max=121, avg=60.61, stdev=13.75 00:28:31.539 lat (msec): min=14, max=128, avg=61.52, stdev=14.02 00:28:31.539 clat percentiles (msec): 00:28:31.539 | 1.00th=[ 28], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 45], 00:28:31.539 | 30.00th=[ 47], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 70], 00:28:31.539 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 77], 00:28:31.539 | 99.00th=[ 83], 99.50th=[ 87], 99.90th=[ 114], 99.95th=[ 118], 00:28:31.539 | 99.99th=[ 122] 00:28:31.539 bw ( KiB/s): min=216064, max=369152, per=6.37%, avg=266009.60, stdev=56001.73, samples=20 00:28:31.539 iops : min= 844, max= 1442, avg=1039.10, stdev=218.76, samples=20 00:28:31.539 lat (msec) : 20=0.10%, 50=31.34%, 100=68.34%, 250=0.23% 00:28:31.539 cpu : usr=2.05%, sys=3.69%, ctx=2740, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,10454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job6: (groupid=0, jobs=1): err= 0: pid=2228435: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=1957, BW=489MiB/s (513MB/s)(4903MiB/10016msec); 0 zone resets 00:28:31.539 slat (usec): min=11, max=21301, avg=493.81, stdev=1250.66 00:28:31.539 clat (usec): min=1690, max=90627, avg=32185.89, stdev=19826.69 00:28:31.539 lat (usec): min=1742, max=92261, avg=32679.70, stdev=20141.53 00:28:31.539 clat percentiles (usec): 00:28:31.539 | 1.00th=[11338], 5.00th=[16909], 10.00th=[17433], 20.00th=[17957], 00:28:31.539 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[19792], 00:28:31.539 | 70.00th=[43779], 80.00th=[54264], 90.00th=[69731], 95.00th=[71828], 00:28:31.539 | 99.00th=[76022], 99.50th=[77071], 99.90th=[82314], 99.95th=[84411], 00:28:31.539 | 99.99th=[88605] 00:28:31.539 bw ( KiB/s): min=224768, max=894976, per=11.98%, avg=500428.80, stdev=289156.63, samples=20 00:28:31.539 iops : min= 878, max= 3496, avg=1954.80, stdev=1129.52, samples=20 00:28:31.539 lat (msec) : 2=0.03%, 4=0.16%, 10=0.61%, 20=60.10%, 50=17.72% 00:28:31.539 lat (msec) : 100=21.38% 00:28:31.539 cpu : usr=3.12%, sys=4.80%, ctx=4470, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: 
total=0,19611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job7: (groupid=0, jobs=1): err= 0: pid=2228436: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=1795, BW=449MiB/s (471MB/s)(4508MiB/10044msec); 0 zone resets 00:28:31.539 slat (usec): min=9, max=26292, avg=543.27, stdev=1172.80 00:28:31.539 clat (usec): min=1410, max=98939, avg=35093.12, stdev=17250.36 00:28:31.539 lat (usec): min=1934, max=98982, avg=35636.39, stdev=17496.16 00:28:31.539 clat percentiles (usec): 00:28:31.539 | 1.00th=[16909], 5.00th=[17695], 10.00th=[18482], 20.00th=[19006], 00:28:31.539 | 30.00th=[19530], 40.00th=[20055], 50.00th=[34341], 60.00th=[38536], 00:28:31.539 | 70.00th=[50594], 80.00th=[57410], 90.00th=[60031], 95.00th=[61080], 00:28:31.539 | 99.00th=[63177], 99.50th=[63701], 99.90th=[81265], 99.95th=[88605], 00:28:31.539 | 99.99th=[95945] 00:28:31.539 bw ( KiB/s): min=268288, max=838144, per=11.01%, avg=459954.65, stdev=219619.14, samples=20 00:28:31.539 iops : min= 1048, max= 3274, avg=1796.65, stdev=857.82, samples=20 00:28:31.539 lat (msec) : 2=0.01%, 4=0.03%, 10=0.21%, 20=38.88%, 50=30.75% 00:28:31.539 lat (msec) : 100=30.12% 00:28:31.539 cpu : usr=3.30%, sys=3.73%, ctx=4079, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,18033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job8: (groupid=0, jobs=1): err= 0: pid=2228437: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=1398, BW=350MiB/s (367MB/s)(3511MiB/10043msec); 0 zone resets 00:28:31.539 slat (usec): min=17, max=11248, avg=708.41, stdev=1266.20 00:28:31.539 clat (msec): min=13, max=100, avg=45.05, stdev=10.82 00:28:31.539 lat (msec): min=13, max=100, avg=45.76, stdev=10.95 00:28:31.539 clat percentiles (msec): 00:28:31.539 | 1.00th=[ 28], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:28:31.539 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 46], 00:28:31.539 | 70.00th=[ 48], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 62], 00:28:31.539 | 99.00th=[ 64], 99.50th=[ 64], 99.90th=[ 86], 99.95th=[ 93], 00:28:31.539 | 99.99th=[ 101] 00:28:31.539 bw ( KiB/s): min=266752, max=558592, per=8.56%, avg=357888.00, stdev=89419.12, samples=20 00:28:31.539 iops : min= 1042, max= 2182, avg=1398.00, stdev=349.29, samples=20 00:28:31.539 lat (msec) : 20=0.10%, 50=71.97%, 100=27.91%, 250=0.01% 00:28:31.539 cpu : usr=3.20%, sys=4.30%, ctx=3440, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,14043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job9: (groupid=0, jobs=1): err= 0: pid=2228438: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=1544, BW=386MiB/s (405MB/s)(3883MiB/10054msec); 0 zone resets 00:28:31.539 slat (usec): min=15, max=31538, avg=638.24, stdev=1356.41 00:28:31.539 clat (msec): min=2, max=117, avg=40.78, stdev=17.15 00:28:31.539 lat (msec): min=2, max=117, avg=41.42, stdev=17.42 00:28:31.539 clat percentiles (msec): 
00:28:31.539 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 29], 00:28:31.539 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 37], 00:28:31.539 | 70.00th=[ 45], 80.00th=[ 48], 90.00th=[ 72], 95.00th=[ 75], 00:28:31.539 | 99.00th=[ 80], 99.50th=[ 82], 99.90th=[ 102], 99.95th=[ 117], 00:28:31.539 | 99.99th=[ 118] 00:28:31.539 bw ( KiB/s): min=215552, max=558592, per=9.48%, avg=395980.80, stdev=147616.79, samples=20 00:28:31.539 iops : min= 842, max= 2182, avg=1546.80, stdev=576.63, samples=20 00:28:31.539 lat (msec) : 4=0.02%, 10=0.10%, 20=0.23%, 50=80.00%, 100=19.52% 00:28:31.539 lat (msec) : 250=0.13% 00:28:31.539 cpu : usr=2.98%, sys=4.04%, ctx=3639, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,15531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 job10: (groupid=0, jobs=1): err= 0: pid=2228439: Tue Jun 11 13:56:23 2024 00:28:31.539 write: IOPS=1022, BW=256MiB/s (268MB/s)(2569MiB/10052msec); 0 zone resets 00:28:31.539 slat (usec): min=18, max=20393, avg=968.87, stdev=1999.98 00:28:31.539 clat (msec): min=16, max=119, avg=61.62, stdev=12.80 00:28:31.539 lat (msec): min=16, max=124, avg=62.59, stdev=13.04 00:28:31.539 clat percentiles (msec): 00:28:31.539 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 46], 00:28:31.539 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 70], 00:28:31.539 | 70.00th=[ 72], 80.00th=[ 73], 90.00th=[ 75], 95.00th=[ 78], 00:28:31.539 | 99.00th=[ 81], 99.50th=[ 83], 99.90th=[ 112], 99.95th=[ 116], 00:28:31.539 | 99.99th=[ 120] 00:28:31.539 bw ( KiB/s): min=217088, max=369152, per=6.26%, avg=261427.20, stdev=55590.24, samples=20 00:28:31.539 iops : min= 848, max= 1442, avg=1021.20, stdev=217.15, samples=20 00:28:31.539 lat (msec) : 20=0.11%, 50=28.25%, 100=71.47%, 250=0.17% 00:28:31.539 cpu : usr=2.28%, sys=3.54%, ctx=2554, majf=0, minf=1 00:28:31.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:31.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:31.539 issued rwts: total=0,10275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:31.539 00:28:31.539 Run status group 0 (all jobs): 00:28:31.539 WRITE: bw=4081MiB/s (4279MB/s), 255MiB/s-710MiB/s (268MB/s-744MB/s), io=40.1GiB (43.0GB), run=10009-10054msec 00:28:31.539 00:28:31.539 Disk stats (read/write): 00:28:31.539 nvme0n1: ios=49/27790, merge=0/0, ticks=14/1220705, in_queue=1220719, util=97.26% 00:28:31.539 nvme10n1: ios=0/20270, merge=0/0, ticks=0/1217662, in_queue=1217662, util=97.31% 00:28:31.540 nvme1n1: ios=0/21426, merge=0/0, ticks=0/1221833, in_queue=1221833, util=97.58% 00:28:31.540 nvme2n1: ios=0/24928, merge=0/0, ticks=0/1223107, in_queue=1223107, util=97.74% 00:28:31.540 nvme3n1: ios=0/56034, merge=0/0, ticks=0/1228961, in_queue=1228961, util=97.80% 00:28:31.540 nvme4n1: ios=0/20660, merge=0/0, ticks=0/1220729, in_queue=1220729, util=98.11% 00:28:31.540 nvme5n1: ios=0/38586, merge=0/0, ticks=0/1226937, in_queue=1226937, util=98.27% 00:28:31.540 nvme6n1: ios=0/35780, merge=0/0, ticks=0/1226381, in_queue=1226381, util=98.37% 00:28:31.540 
nvme7n1: ios=0/27800, merge=0/0, ticks=0/1220067, in_queue=1220067, util=98.75% 00:28:31.540 nvme8n1: ios=0/30809, merge=0/0, ticks=0/1219594, in_queue=1219594, util=98.95% 00:28:31.540 nvme9n1: ios=0/20294, merge=0/0, ticks=0/1218374, in_queue=1218374, util=99.06% 00:28:31.540 13:56:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:31.540 13:56:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:31.540 13:56:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:31.540 13:56:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:32.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:32.484 13:56:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:33.428 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:33.428 13:56:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:33.428 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:33.428 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:33.428 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:28:33.428 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:33.428 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:33.690 13:56:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:35.073 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:35.073 13:56:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:36.014 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:36.014 13:56:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:36.014 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:36.014 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:36.014 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:28:36.014 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:36.014 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:36.275 13:56:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:37.660 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK5 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:37.660 13:56:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:39.045 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:39.045 13:56:31 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:40.459 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- 
# lsblk -l -o NAME,SERIAL 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.460 13:56:32 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:40.460 13:56:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.460 13:56:33 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:40.460 13:56:33 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:41.845 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:41.845 13:56:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:42.787 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:42.787 13:56:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:42.787 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:42.787 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:42.787 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:28:42.787 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:42.787 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:28:43.048 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:43.049 13:56:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:43.049 13:56:35 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:28:43.049 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.049 13:56:35 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:43.049 13:56:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.049 13:56:35 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:44.433 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.433 13:56:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:45.817 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- 
target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:45.817 rmmod nvme_rdma 00:28:45.817 rmmod nvme_fabrics 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2218263 ']' 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2218263 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 2218263 ']' 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 2218263 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2218263 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2218263' 00:28:45.817 killing process with pid 2218263 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 2218263 00:28:45.817 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 2218263 00:28:46.077 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:46.077 13:56:38 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:46.077 00:28:46.077 real 1m23.937s 00:28:46.077 user 5m43.936s 00:28:46.077 sys 0m17.015s 00:28:46.077 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:46.077 13:56:38 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:46.077 ************************************ 00:28:46.077 END TEST nvmf_multiconnection 00:28:46.077 ************************************ 00:28:46.077 13:56:38 nvmf_rdma -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:28:46.077 13:56:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:46.077 13:56:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 
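The teardown traced above walks all eleven NVMe-oF subsystems the multiconnection test created: for each cnode the host controller is disconnected, the script waits until no block device with the matching SPDK<i> serial is visible in lsblk, and the subsystem is then deleted over the SPDK RPC socket. A minimal bash sketch of that pattern, reconstructed from the trace (the eleven-subsystem count, the NQN prefix and the lsblk/grep check are taken from the log; the explicit retry loop and the scripts/rpc.py invocation behind rpc_cmd are assumptions, and the real waitforserial_disconnect helper in autotest_common.sh may differ):

    # Per-subsystem teardown pattern seen in the multiconnection trace (assumed simplification).
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # Host side: drop the controller for this subsystem.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

        # Wait until no block device carrying serial SPDK$i remains visible.
        retries=20
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            (( retries-- > 0 )) || { echo "SPDK${i} did not disappear" >&2; break; }
            sleep 1
        done

        # Target side: remove the subsystem over the RPC socket
        # (rpc_cmd in the trace is assumed to wrap scripts/rpc.py).
        ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done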
00:28:46.077 13:56:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:46.077 ************************************ 00:28:46.077 START TEST nvmf_initiator_timeout 00:28:46.077 ************************************ 00:28:46.077 13:56:38 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:28:46.338 * Looking for test storage... 00:28:46.338 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.338 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.339 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.339 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.339 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.339 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.339 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.339 13:56:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:54.485 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:54.485 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5 == 
e810 ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.485 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:54.485 Found net devices under 0000:98:00.0: mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:54.486 Found net devices under 0000:98:00.1: mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:54.486 13:56:46 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:54.486 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:54.486 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:28:54.486 altname enp152s0f0np0 00:28:54.486 altname ens817f0np0 00:28:54.486 inet 192.168.100.8/24 scope global mlx_0_0 00:28:54.486 valid_lft forever preferred_lft forever 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:54.486 13:56:46 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:54.486 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:54.486 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:28:54.486 altname enp152s0f1np1 00:28:54.486 altname ens817f1np1 00:28:54.486 inet 192.168.100.9/24 scope global mlx_0_1 00:28:54.486 valid_lft forever preferred_lft forever 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:54.486 13:56:46 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:54.486 192.168.100.9' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:54.486 192.168.100.9' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:54.486 192.168.100.9' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:54.486 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2236883 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2236883 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 2236883 ']' 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:54.487 13:56:46 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 [2024-06-11 13:56:46.335503] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:28:54.487 [2024-06-11 13:56:46.335556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.487 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.487 [2024-06-11 13:56:46.397967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:54.487 [2024-06-11 13:56:46.465359] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.487 [2024-06-11 13:56:46.465398] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.487 [2024-06-11 13:56:46.465406] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.487 [2024-06-11 13:56:46.465412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.487 [2024-06-11 13:56:46.465417] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
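Before the target application is started, nvmftestinit resolves the RDMA-capable interfaces and takes the first IPv4 address of each, stripping the prefix length with awk and cut; that is how 192.168.100.8 and 192.168.100.9 end up in RDMA_IP_LIST above. A short sketch of that extraction under the same assumptions (the interface names mlx_0_0/mlx_0_1 come from this run; get_ipv4 is an illustrative name, the script's own helper is called get_ip_address):

    # Mirror of the get_ip_address calls in the trace: first IPv4 address
    # of an interface, without the /prefix length.
    get_ipv4() {
        local ifname=$1
        ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ipv4 mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ipv4 mlx_0_1)   # 192.168.100.9 in this run
    modprobe nvme-rdma                          # host-side NVMe/RDMA transport module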
00:28:54.487 [2024-06-11 13:56:46.465554] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.487 [2024-06-11 13:56:46.465669] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:54.487 [2024-06-11 13:56:46.465825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.487 [2024-06-11 13:56:46.465826] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 Malloc0 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 Delay0 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 [2024-06-11 13:56:47.220211] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1574c10/0x13f4100) succeed. 00:28:54.487 [2024-06-11 13:56:47.233380] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15760c0/0x145f140) succeed. 
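The bdev stack and transport created above come down to three RPCs, copied from the trace (a sketch of the calls, not the harness's wrapper): a 64 MB malloc bdev with 512-byte blocks, a delay bdev layered on top with 30 us injected latencies, and an RDMA transport with 1024 shared buffers and 8192-byte in-capsule data.

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                             # 64 MB backing bdev, 512 B blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us read/write latencies
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The 30 us delays matter because, further down the trace, the test raises them to 31000000 us (31 s) with bdev_delay_update_latency, presumably so that in-flight I/O outlives the initiator's timeout window, and then restores them, which is what initiator_timeout.sh is exercising.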
00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:54.487 [2024-06-11 13:56:47.386809] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:54.487 13:56:47 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:28:56.402 13:56:48 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:56.402 13:56:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:28:56.402 13:56:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:28:56.402 13:56:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:28:56.402 13:56:48 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2237699 00:28:58.312 13:56:50 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:58.312 13:56:50 
nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:58.312 [global] 00:28:58.312 thread=1 00:28:58.312 invalidate=1 00:28:58.312 rw=write 00:28:58.312 time_based=1 00:28:58.312 runtime=60 00:28:58.312 ioengine=libaio 00:28:58.312 direct=1 00:28:58.312 bs=4096 00:28:58.312 iodepth=1 00:28:58.312 norandommap=0 00:28:58.312 numjobs=1 00:28:58.312 00:28:58.312 verify_dump=1 00:28:58.312 verify_backlog=512 00:28:58.312 verify_state_save=0 00:28:58.312 do_verify=1 00:28:58.312 verify=crc32c-intel 00:28:58.312 [job0] 00:28:58.312 filename=/dev/nvme0n1 00:28:58.312 Could not set queue depth (nvme0n1) 00:28:58.312 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:58.312 fio-3.35 00:28:58.312 Starting 1 thread 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.610 true 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.610 true 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.610 true 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:01.610 true 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:01.610 13:56:53 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.154 true 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 
avg_write 30 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.154 true 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.154 true 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:04.154 true 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:29:04.154 13:56:56 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2237699 00:30:00.431 00:30:00.431 job0: (groupid=0, jobs=1): err= 0: pid=2238033: Tue Jun 11 13:57:51 2024 00:30:00.431 read: IOPS=793, BW=3172KiB/s (3248kB/s)(186MiB/60000msec) 00:30:00.431 slat (usec): min=4, max=13734, avg=15.17, stdev=73.41 00:30:00.431 clat (usec): min=28, max=43356k, avg=1065.47, stdev=198754.35 00:30:00.431 lat (usec): min=83, max=43356k, avg=1080.65, stdev=198754.43 00:30:00.431 clat percentiles (usec): 00:30:00.431 | 1.00th=[ 84], 5.00th=[ 92], 10.00th=[ 98], 20.00th=[ 103], 00:30:00.431 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 116], 60.00th=[ 190], 00:30:00.431 | 70.00th=[ 196], 80.00th=[ 210], 90.00th=[ 229], 95.00th=[ 253], 00:30:00.431 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 408], 99.95th=[ 420], 00:30:00.431 | 99.99th=[ 453] 00:30:00.431 write: IOPS=793, BW=3174KiB/s (3251kB/s)(186MiB/60000msec); 0 zone resets 00:30:00.431 slat (usec): min=7, max=337, avg=18.21, stdev=12.72 00:30:00.431 clat (usec): min=69, max=940, avg=152.87, stdev=62.13 00:30:00.431 lat (usec): min=82, max=973, avg=171.07, stdev=69.71 00:30:00.431 clat percentiles (usec): 00:30:00.431 | 1.00th=[ 82], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 101], 00:30:00.431 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 190], 00:30:00.431 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 231], 95.00th=[ 258], 00:30:00.431 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 396], 99.95th=[ 412], 00:30:00.431 | 99.99th=[ 433] 00:30:00.431 bw ( KiB/s): min= 672, max=16808, per=100.00%, avg=10842.35, stdev=3410.76, samples=34 00:30:00.431 iops : min= 168, max= 4202, avg=2710.59, stdev=852.69, samples=34 00:30:00.431 lat (usec) : 50=0.01%, 100=16.30%, 250=78.23%, 500=5.46%, 750=0.01% 00:30:00.431 lat (usec) : 1000=0.01% 00:30:00.431 lat (msec) : >=2000=0.01% 00:30:00.431 cpu : usr=1.84%, sys=3.58%, ctx=95207, majf=0, minf=144 00:30:00.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.431 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.431 issued rwts: total=47585,47616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:00.431 00:30:00.431 Run status group 0 (all jobs): 00:30:00.431 READ: bw=3172KiB/s (3248kB/s), 3172KiB/s-3172KiB/s (3248kB/s-3248kB/s), io=186MiB (195MB), run=60000-60000msec 00:30:00.431 WRITE: bw=3174KiB/s (3251kB/s), 3174KiB/s-3174KiB/s (3251kB/s-3251kB/s), io=186MiB (195MB), run=60000-60000msec 00:30:00.431 00:30:00.431 Disk stats (read/write): 00:30:00.431 nvme0n1: ios=47301/47415, merge=0/0, ticks=5633/5544, in_queue=11177, util=99.50% 00:30:00.431 13:57:51 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:00.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:30:00.431 nvmf hotplug test: fio successful as expected 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:30:00.431 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:30:00.432 rmmod nvme_rdma 
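The fio summary above is internally consistent; a quick arithmetic check (figures taken from the log, the shell arithmetic is mine):

  echo $(( 47585 * 4096 / 60 / 1024 ))    # 3172 -> read KiB/s, matches BW=3172KiB/s
  echo $(( 47585 / 60 ))                  # 793  -> read IOPS, matches IOPS=793
  echo $(( 47585 * 4096 / 1024 / 1024 ))  # 185  -> roughly 186 MiB over the 60 s run, matches io=186MiB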
00:30:00.432 rmmod nvme_fabrics 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2236883 ']' 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2236883 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 2236883 ']' 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 2236883 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2236883 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2236883' 00:30:00.432 killing process with pid 2236883 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 2236883 00:30:00.432 13:57:52 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 2236883 00:30:00.432 13:57:53 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.432 13:57:53 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:30:00.432 00:30:00.432 real 1m14.079s 00:30:00.432 user 4m40.823s 00:30:00.432 sys 0m8.232s 00:30:00.432 13:57:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:00.432 13:57:53 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:00.432 ************************************ 00:30:00.432 END TEST nvmf_initiator_timeout 00:30:00.432 ************************************ 00:30:00.432 13:57:53 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:30:00.432 13:57:53 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:30:00.432 13:57:53 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:30:00.432 13:57:53 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:30:00.432 13:57:53 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:00.432 13:57:53 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:00.432 13:57:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:30:00.432 ************************************ 00:30:00.432 START TEST nvmf_device_removal 00:30:00.432 ************************************ 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1124 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:30:00.432 * Looking for test storage... 
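For completeness, the teardown the trace walks through before the device_removal test begins (host disconnect, subsystem removal, module unload, target shutdown) as a bare sketch; the commands are the ones logged above, and $nvmfpid refers to the hypothetical variable from the earlier sketch rather than anything the harness exports.

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"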
00:30:00.432 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 
00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:30:00.432 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:30:00.433 13:57:53 
nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- 
common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:00.433 #define SPDK_CONFIG_H 00:30:00.433 #define SPDK_CONFIG_APPS 1 00:30:00.433 #define SPDK_CONFIG_ARCH native 00:30:00.433 #undef SPDK_CONFIG_ASAN 00:30:00.433 #undef SPDK_CONFIG_AVAHI 00:30:00.433 #undef SPDK_CONFIG_CET 00:30:00.433 #define SPDK_CONFIG_COVERAGE 1 00:30:00.433 #define SPDK_CONFIG_CROSS_PREFIX 00:30:00.433 #undef SPDK_CONFIG_CRYPTO 00:30:00.433 #undef SPDK_CONFIG_CRYPTO_MLX5 00:30:00.433 #undef SPDK_CONFIG_CUSTOMOCF 00:30:00.433 #undef SPDK_CONFIG_DAOS 00:30:00.433 #define SPDK_CONFIG_DAOS_DIR 00:30:00.433 #define SPDK_CONFIG_DEBUG 1 00:30:00.433 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:30:00.433 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:30:00.433 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:00.433 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:00.433 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:00.433 #undef SPDK_CONFIG_DPDK_UADK 00:30:00.433 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:30:00.433 #define SPDK_CONFIG_EXAMPLES 1 00:30:00.433 #undef SPDK_CONFIG_FC 00:30:00.433 #define SPDK_CONFIG_FC_PATH 00:30:00.433 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:00.433 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:00.433 #undef SPDK_CONFIG_FUSE 00:30:00.433 #undef SPDK_CONFIG_FUZZER 00:30:00.433 #define SPDK_CONFIG_FUZZER_LIB 00:30:00.433 #undef SPDK_CONFIG_GOLANG 00:30:00.433 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:00.433 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:00.433 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:00.433 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:00.433 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:00.433 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:00.433 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:00.433 #define SPDK_CONFIG_IDXD 1 00:30:00.433 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:00.433 #undef 
SPDK_CONFIG_IPSEC_MB 00:30:00.433 #define SPDK_CONFIG_IPSEC_MB_DIR 00:30:00.433 #define SPDK_CONFIG_ISAL 1 00:30:00.433 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:00.433 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:00.433 #define SPDK_CONFIG_LIBDIR 00:30:00.433 #undef SPDK_CONFIG_LTO 00:30:00.433 #define SPDK_CONFIG_MAX_LCORES 00:30:00.433 #define SPDK_CONFIG_NVME_CUSE 1 00:30:00.433 #undef SPDK_CONFIG_OCF 00:30:00.433 #define SPDK_CONFIG_OCF_PATH 00:30:00.433 #define SPDK_CONFIG_OPENSSL_PATH 00:30:00.433 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:00.433 #define SPDK_CONFIG_PGO_DIR 00:30:00.433 #undef SPDK_CONFIG_PGO_USE 00:30:00.433 #define SPDK_CONFIG_PREFIX /usr/local 00:30:00.433 #undef SPDK_CONFIG_RAID5F 00:30:00.433 #undef SPDK_CONFIG_RBD 00:30:00.433 #define SPDK_CONFIG_RDMA 1 00:30:00.433 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:00.433 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:00.433 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:00.433 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:00.433 #define SPDK_CONFIG_SHARED 1 00:30:00.433 #undef SPDK_CONFIG_SMA 00:30:00.433 #define SPDK_CONFIG_TESTS 1 00:30:00.433 #undef SPDK_CONFIG_TSAN 00:30:00.433 #define SPDK_CONFIG_UBLK 1 00:30:00.433 #define SPDK_CONFIG_UBSAN 1 00:30:00.433 #undef SPDK_CONFIG_UNIT_TESTS 00:30:00.433 #undef SPDK_CONFIG_URING 00:30:00.433 #define SPDK_CONFIG_URING_PATH 00:30:00.433 #undef SPDK_CONFIG_URING_ZNS 00:30:00.433 #undef SPDK_CONFIG_USDT 00:30:00.433 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:30:00.433 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:30:00.433 #undef SPDK_CONFIG_VFIO_USER 00:30:00.433 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:00.433 #define SPDK_CONFIG_VHOST 1 00:30:00.433 #define SPDK_CONFIG_VIRTIO 1 00:30:00.433 #undef SPDK_CONFIG_VTUNE 00:30:00.433 #define SPDK_CONFIG_VTUNE_DIR 00:30:00.433 #define SPDK_CONFIG_WERROR 1 00:30:00.433 #define SPDK_CONFIG_WPDK_DIR 00:30:00.433 #undef SPDK_CONFIG_XNVME 00:30:00.433 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.433 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- 
pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # : 1 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # : 1 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # : 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@79 -- # export 
SPDK_TEST_ISCSI_INITIATOR 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # : 1 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # : 1 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # : rdma 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@113 -- # export 
SPDK_TEST_BLOBFS 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # : 1 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # : 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # : 0 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:30:00.434 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # : 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # : true 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 
00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # : mlx5 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # : 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # : 0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@200 -- # cat 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:00.435 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # export valgrind= 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # valgrind= 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # uname -s 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKE=make 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # TEST_MODE= 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # for i in "$@" 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@301 -- # case "$i" in 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # [[ -z 2250322 ]] 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # kill -0 2250322 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@331 -- # local mount target_dir 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.AP1Chq 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:30:00.436 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.AP1Chq/tests/target /tmp/spdk.AP1Chq 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # df -T 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=957403136 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4327026688 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=123649548288 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370984448 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=5721436160 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- 
# avails["$mount"]=64672194560 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=13295616 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.743 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=25850793984 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=23404544 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=64685133824 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685494272 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=360448 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:30:00.744 * Looking for test storage... 
00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # local target_space new_size 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # mount=/ 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # target_space=123649548288 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # new_size=7936028672 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:00.744 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@389 -- # return 0 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1681 -- # set -o errtrace 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1686 -- # true 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1688 -- # xtrace_fd 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- # xtrace_restore 
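The errtrace/extdebug/trap/PS4 block traced above is what produces the timestamped "file@LINENO" prefix on every traced command in this log. A minimal standalone sketch of that setup follows; the PS4 string is copied from the trace, while print_backtrace is a simplified, hypothetical stand-in for the SPDK helper of the same name:

    #!/usr/bin/env bash
    # Minimal sketch of the xtrace setup seen above; the real helpers live in
    # autotest_common.sh, and print_backtrace here is only a simplified stand-in.
    print_backtrace() {
        local i
        for ((i = 1; i < ${#FUNCNAME[@]}; i++)); do
            echo "  at ${BASH_SOURCE[$i]}:${BASH_LINENO[$((i - 1))]} -> ${FUNCNAME[$i]}()"
        done
    }

    test_domain=nvmf_rdma.nvmf_device_removal
    set -o errtrace                  # propagate the ERR trap into functions and subshells
    shopt -s extdebug                # debugger-friendly mode (BASH_ARGC/BASH_ARGV kept up to date)
    trap 'trap - ERR; print_backtrace >&2' ERR
    PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x                           # from here on, every command is echoed with that prefix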
00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.744 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:30:00.745 13:57:53 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.745 13:57:53 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.339 13:58:00 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:30:07.339 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:30:07.339 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:07.339 13:58:00 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:30:07.339 Found net devices under 0000:98:00.0: mlx_0_0 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:30:07.339 Found net devices under 0000:98:00.1: mlx_0_1 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:30:07.339 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:07.340 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:30:07.601 10: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:07.601 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:30:07.601 altname enp152s0f0np0 00:30:07.601 altname ens817f0np0 00:30:07.601 inet 192.168.100.8/24 scope global mlx_0_0 00:30:07.601 valid_lft forever preferred_lft forever 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr 
show mlx_0_1 00:30:07.601 11: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:30:07.601 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:30:07.601 altname enp152s0f1np1 00:30:07.601 altname ens817f1np1 00:30:07.601 inet 192.168.100.9/24 scope global mlx_0_1 00:30:07.601 valid_lft forever preferred_lft forever 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 
-- # get_ip_address mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:30:07.601 192.168.100.9' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:30:07.601 192.168.100.9' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:30:07.601 192.168.100.9' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:30:07.601 ************************************ 00:30:07.601 START TEST nvmf_device_removal_pci_remove_no_srq 00:30:07.601 ************************************ 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1124 -- # test_remove_and_rescan --no-srq 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:07.601 13:58:00 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=2254073 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 2254073 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 2254073 ']' 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:07.601 13:58:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:07.861 [2024-06-11 13:58:00.538806] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:30:07.861 [2024-06-11 13:58:00.538871] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.861 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.861 [2024-06-11 13:58:00.605022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:07.861 [2024-06-11 13:58:00.679769] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.861 [2024-06-11 13:58:00.679809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.861 [2024-06-11 13:58:00.679817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.862 [2024-06-11 13:58:00.679823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.862 [2024-06-11 13:58:00.679829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
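The nvmfappstart/waitforlisten step traced above boils down to launching build/bin/nvmf_tgt with the flags shown in the trace and polling its JSON-RPC UNIX socket until it answers. A rough sketch under stated assumptions (the 100 x 0.1 s retry loop is an invention for illustration; the real waitforlisten performs additional liveness checks):

    # Rough sketch of nvmfappstart/waitforlisten (assumption: simplified polling
    # loop, not the real autotest_common.sh helper).
    spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc_sock=/var/tmp/spdk.sock

    "$spdk_dir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll the RPC socket until the target answers (or give up after ~10 s).
    for _ in $(seq 1 100); do
        if "$spdk_dir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done
    echo "nvmf_tgt is up with pid $nvmfpid"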
00:30:07.862 [2024-06-11 13:58:00.679967] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.862 [2024-06-11 13:58:00.679969] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.432 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:08.432 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:30:08.432 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:08.432 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:08.432 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.693 [2024-06-11 13:58:01.394141] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x246b7b0/0x246fca0) succeed. 00:30:08.693 [2024-06-11 13:58:01.407343] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x246ccb0/0x24b1330) succeed. 
00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo mlx_0_1 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:30:08.693 
13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:30:08.693 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 [2024-06-11 13:58:01.536924] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:30:08.694 13:58:01 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.694 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:08.955 [2024-06-11 13:58:01.621575] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:30:08.955 13:58:01 
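The create_subsystem_and_connect_on_netdev pass that just completed for mlx_0_0 and mlx_0_1 comes down to four RPCs per interface. A minimal sketch, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the running nvmf target; the sizes, serial and NQN format are the ones shown in the trace:

dev=mlx_0_0                                                  # second pass uses mlx_0_1 / 192.168.100.9
ip=$(get_ip_address "$dev")                                  # 192.168.100.8 in this run
nqn=nqn.2016-06.io.spdk:system_${dev}
rpc_cmd bdev_malloc_create 128 512 -b "$dev"                 # 128 MiB backing bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem "$nqn" -a -s "SPDK000${dev}"   # -a: allow any host to connect
rpc_cmd nvmf_subsystem_add_ns "$nqn" "$dev"                  # expose the malloc bdev as a namespace
rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420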
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:30:08.955 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=2254195 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 2254195 /var/tmp/bdevperf.sock 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 2254195 ']' 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:08.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
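The I/O generator here is bdevperf started with -z, so it comes up idle and waits to be configured over its own RPC socket. A sketch of the launch as traced above; the polling loop at the end is only a rough stand-in for what waitforlisten does:

bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
# -m 0x4: core mask, -z: wait for RPC configuration before running,
# -q 128 -o 4096 -w verify -t 90: queue depth, I/O size, workload and run time for this test
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
bdevperf_pid=$!
until rpc_cmd -s /var/tmp/bdevperf.sock rpc_get_methods &>/dev/null; do
    sleep 0.2    # wait for the UNIX-domain RPC socket to answer
done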
00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:08.956 13:58:01 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:09.897 Nvme_mlx_0_0n1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:09.897 Nvme_mlx_0_1n1 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=2254536 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:30:09.897 13:58:02 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:30:15.183 13:58:07 
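With bdevperf listening, each target interface is attached as an NVMe-oF controller over RDMA and the verify workload is started out-of-process through bdevperf.py, so I/O keeps running while the PCI device is yanked underneath it. A sketch mirroring the trace; the -r -1 and -l -1 -o 1 values are taken verbatim from it (they control retry and reconnect behaviour while the device is gone):

rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
for dev in mlx_0_0 mlx_0_1; do
    ip=$(get_ip_address "$dev")
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b "Nvme_${dev}" -t rdma -a "$ip" -s 4420 -f ipv4 \
        -n "nqn.2016-06.io.spdk:system_${dev}" -l -1 -o 1     # creates Nvme_${dev}n1
done
# drive I/O from a separate process so the removal steps below can run in parallel
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 120 -s /var/tmp/bdevperf.sock perform_tests &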
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/infiniband 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:15.183 mlx5_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:30:15.183 13:58:07 
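The check that just passed and the remove_one_nic call that follows reduce to a stats query plus a sysfs write. A sketch; the wildcard in get_pci_dir and the '/remove' redirection target are assumptions (the xtrace shows neither the glob nor the redirect), everything else mirrors the trace:

check_rdma_dev_exists_in_nvmf_tgt() {
    # succeeds while the ibv device (mlx5_0 here) is still known to the nvmf target
    rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[0].transports[].devices[].name' \
        | grep -q "$1"
}

get_pci_dir() {
    # resolves e.g. mlx_0_0 -> /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0
    readlink -f /sys/bus/pci/devices/*/net/"$1"/device
}

remove_one_nic() {
    echo 1 > "$(get_pci_dir "$1")/remove"   # hot-unplug the PCI function under the netdev
}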
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:30:15.183 13:58:07 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:30:15.183 [2024-06-11 13:58:07.796806] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:30:15.183 [2024-06-11 13:58:07.796891] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:30:15.183 [2024-06-11 13:58:07.798383] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:30:15.183 [2024-06-11 13:58:07.798407] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:30:15.183 [2024-06-11 13:58:07.798412] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:30:15.183 [2024-06-11 13:58:07.798416] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798420] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798424] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798428] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798432] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798436] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798439] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798443] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798447] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798451] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798454] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798458] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798462] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798466] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798469] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798473] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798480] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 13:58:07.798484] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.183 [2024-06-11 13:58:07.798488] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.183 [2024-06-11 
13:58:07.798492] … [nvmf_rdma_dump_request (rdma.c:632/634) repeats 'Request Data From Pool: 1' / 'Request opcode: 2' for each of the ~95 requests queued on the destroyed qpair] …
00:30:15.185 [2024-06-11 13:58:07.799107] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.185 [2024-06-11 13:58:07.799112] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.185 [2024-06-11 13:58:07.799116] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.185 [2024-06-11 13:58:07.799119] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.185 [2024-06-11 13:58:07.799125] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.185 [2024-06-11 13:58:07.799131] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:15.185 [2024-06-11 13:58:07.799137] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:15.185 [2024-06-11 13:58:07.799143] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_0 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:23.327 
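Once mlx5_0 no longer shows up in the stats, the script records how many ibv devices the target still has, so it can later tell when the removed device has come back. The count helper as traced above, in sketch form:

get_rdma_dev_count_in_nvmf_tgt() {
    rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
}
ib_count_after_remove=$(get_rdma_dev_count_in_nvmf_tgt)   # 1 in this run: only mlx5_1 is left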
13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:30:23.327 13:58:14 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:30:23.327 [2024-06-11 13:58:15.534360] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x246bba0, err 11. Skip rescan. 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/net 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:30:23.327 13:58:15 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:30:23.327 [2024-06-11 13:58:15.899273] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x26752a0/0x246fca0) succeed. 00:30:23.327 [2024-06-11 13:58:15.899328] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
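Recovery, which the trace works through next, is the mirror image of the removal: rescan the PCI bus, wait for the netdev to reappear, bring it up, re-add the address that vanished with the device, and poll until the target has re-created the ibv device and the 4420 listener returns. A sketch; the /sys/bus/pci/rescan target and the sleeps are assumptions, the rest follows the trace:

echo 1 > /sys/bus/pci/rescan                      # assumed target of rescan_pci's bare 'echo 1'
for i in $(seq 1 10); do
    new_net_dev=$(ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/net 2>/dev/null)
    [[ -n $new_net_dev ]] && break
    sleep 1
done
ip link set "$new_net_dev" up
[[ -z $(get_ip_address "$new_net_dev") ]] && ip addr add 192.168.100.8/24 dev "$new_net_dev"
for i in $(seq 1 10); do
    # done once the target reports more ibv devices than right after the removal
    (( $(get_rdma_dev_count_in_nvmf_tgt) > ib_count_after_remove )) && break
    sleep 2
done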
00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.626 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:26.626 [2024-06-11 13:58:19.148474] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:26.627 [2024-06-11 13:58:19.148505] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:30:26.627 [2024-06-11 13:58:19.148519] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:30:26.627 [2024-06-11 13:58:19.148530] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/infiniband 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.627 13:58:19 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.627 mlx5_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:30:26.627 13:58:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:30:26.627 [2024-06-11 13:58:19.301792] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:30:26.627 [2024-06-11 13:58:19.301859] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:30:26.627 [2024-06-11 13:58:19.310775] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:30:26.627 [2024-06-11 13:58:19.310806] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:30:26.627 [2024-06-11 13:58:19.310813] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:30:26.627 [2024-06-11 13:58:19.310820] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.627 [2024-06-11 13:58:19.310826] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.627 [2024-06-11 13:58:19.310831] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.627 [2024-06-11 13:58:19.310837] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.627 [2024-06-11 13:58:19.310842] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.627 [2024-06-11 13:58:19.310847] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.627 [2024-06-11 13:58:19.310854] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.627 [2024-06-11 13:58:19.310859] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.627 [2024-06-11 13:58:19.310864] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.627 [2024-06-11 13:58:19.310869] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.627 [2024-06-11 13:58:19.310875] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.627 [2024-06-11 13:58:19.310880] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.627 [2024-06-11 13:58:19.310885] rdma.c: 632:nvmf_rdma_dump_request: 
*ERROR*: Request Data From Pool: 1 00:30:26.627 [... remaining rdma.c: 632/634:nvmf_rdma_dump_request entries for this qpair, timestamps 13:58:19.310890 through 13:58:19.311782: each request still queued logs 'Request Data From Pool: 1' followed by 'Request opcode: 1' or 'Request opcode: 2' ...] [2024-06-11 13:58:19.311789] rdma.c: 
634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311795] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311800] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:30:26.629 [2024-06-11 13:58:19.311805] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311811] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311817] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311823] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311829] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311834] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311839] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311844] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311851] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311857] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311862] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311867] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:26.629 [2024-06-11 13:58:19.311873] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:30:26.629 [2024-06-11 13:58:19.311878] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep mlx5_1 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 
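For reference while reading the trace above, the device_removal.sh and nvmf/common.sh helpers being exercised (get_pci_dir, get_rdma_device_name, get_ip_address, check_rdma_dev_exists_in_nvmf_tgt, get_rdma_dev_count_in_nvmf_tgt, remove_one_nic) reduce to roughly the shell below, reconstructed from the xtrace lines. This is a sketch, not the authoritative script source: the glob in get_pci_dir, the '.../remove' redirect target, and the rpc_cmd wrapper (assumed to be SPDK's usual scripts/rpc.py helper) are not visible in the trace.

get_pci_dir() {
    local dev_name=$1
    # xtrace shows: readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device;
    # the BDF is assumed to come from expanding this glob against the netdev name.
    readlink -f /sys/bus/pci/devices/*/net/"$dev_name"/device
}

get_rdma_device_name() {
    local dev_name=$1
    # e.g. ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/infiniband  ->  mlx5_1
    ls "$(get_pci_dir "$dev_name")"/infiniband
}

get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

check_rdma_dev_exists_in_nvmf_tgt() {
    local rdma_dev_name=$1
    # rpc_cmd is assumed to be the usual wrapper that talks to the running nvmf target.
    rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name' | grep "$rdma_dev_name" || return 1
    return 0
}

get_rdma_dev_count_in_nvmf_tgt() {
    local rdma_dev_name=
    rpc_cmd nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'
}

remove_one_nic() {
    local dev_name=$1
    # xtrace shows only 'echo 1'; writing to the device's PCI remove node is the assumed redirect target.
    echo 1 > "$(get_pci_dir "$dev_name")"/remove
}

With these helpers, the 'for i in $(seq 1 10)' loop above simply re-checks check_rdma_dev_exists_in_nvmf_tgt after the hot remove; once mlx5_1 drops out of nvmf_get_stats the helper returns 1 and the loop breaks.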
00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:30:34.771 13:58:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:30:34.771 [2024-06-11 13:58:27.216781] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x24f2880, err 11. Skip rescan. 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/net 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:30:34.771 13:58:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:30:34.771 [2024-06-11 13:58:27.562288] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x26750d0/0x24b1330) succeed. 00:30:34.771 [2024-06-11 13:58:27.562348] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
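The recovery sequence traced here (script lines 160 through 179) rescans PCI, polls for the netdev to reappear under the saved pci_dir, and brings the link back up. A rough reconstruction of that fragment of the test body follows; the '/sys/bus/pci/rescan' target of the bare 'echo 1', the retry delay, and the net_dev variable name are assumptions, and branches this run never takes are left as comments.

rescan_pci() {
    # xtrace shows only 'echo 1'; /sys/bus/pci/rescan is the assumed redirect target.
    echo 1 > /sys/bus/pci/rescan
}

ib_count_after_remove=$(get_rdma_dev_count_in_nvmf_tgt)    # 1 at this point in the run
rescan_pci
for i in $(seq 1 10); do
    new_net_dev=$(ls "$pci_dir"/net 2> /dev/null || true)
    if [[ -z $new_net_dev ]]; then
        sleep 1        # assumed retry delay; the empty branch is not exercised in this run
        continue
    fi
    if [[ $new_net_dev != "$net_dev" ]]; then
        :              # handling for a renamed netdev (script lines 167-170) is not exercised here
    fi
    break
done
if [[ -z $new_net_dev ]]; then
    return 1           # assumed error path; only the false branch of the line-175 test is seen
fi
ip link set "$net_dev" up    # line 179: bring mlx_0_1 back up

The 'Failed to init ibv device ... err 11. Skip rescan.' and 'still failed(-1) to listen' messages show why the retry matters: the kernel netdev comes back before the target can re-open the verbs device, so the listener is only restored on a later retry, as the 'come back' notice further down confirms.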
00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:30:38.074 [2024-06-11 13:58:30.913025] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:30:38.074 [2024-06-11 13:58:30.913062] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:30:38.074 [2024-06-11 13:58:30.913074] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:30:38.074 [2024-06-11 13:58:30.913084] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:30:38.074 13:58:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 
2254536 00:31:45.889 0 00:31:45.889 13:59:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 2254195 00:31:45.889 13:59:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 2254195 ']' 00:31:45.889 13:59:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 2254195 00:31:45.889 13:59:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname 00:31:45.889 13:59:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:45.889 13:59:32 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2254195 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2254195' 00:31:45.889 killing process with pid 2254195 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 2254195 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 2254195 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:31:45.889 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:31:45.889 [2024-06-11 13:58:01.675852] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:31:45.889 [2024-06-11 13:58:01.675903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2254195 ] 00:31:45.889 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.889 [2024-06-11 13:58:01.726923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.889 [2024-06-11 13:58:01.779757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:45.889 Running I/O for 90 seconds... 
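The teardown traced above re-adds the IP address that was lost with the port, waits for nvmf_get_stats to show the RDMA device again, and then stops bdevperf. A condensed reconstruction follows; killprocess comes from common/autotest_common.sh and is rebuilt from its traced lines 949-973, while the poll interval and the bdevperf_rpc_pid and testdir names are assumptions.

# killprocess() lives in common/autotest_common.sh; rebuilt here from the traced lines.
killprocess() {
    local pid=$1
    [[ -z $pid ]] && return 1
    kill -0 "$pid"                                        # make sure the process still exists
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 for this bdevperf instance
    fi
    if [[ $process_name == sudo ]]; then
        :                                                 # the sudo branch is not exercised in this run
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

# Tail of the test body as traced above (bdevperf_rpc_pid and testdir are assumed names):
[[ -z $(get_ip_address mlx_0_1) ]] && ip addr add 192.168.100.9/24 dev mlx_0_1
for i in $(seq 1 10); do
    ib_count=$(get_rdma_dev_count_in_nvmf_tgt)
    (( ib_count > ib_count_after_remove )) && break       # 2 > 1 on the first pass here
    sleep 2                                               # assumed poll interval; not exercised in this run
done
wait "$bdevperf_rpc_pid"       # pid 2254536 above
killprocess "$bdevperf_pid"    # pid 2254195
bdevperf_pid=
cat "$testdir/try.txt"         # full path shown in the trace above

Everything from the 'Starting SPDK v24.09-pre' line onward is the content of that try.txt, i.e. bdevperf's own log of the removal as seen from the initiator side.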
00:31:45.889 [2024-06-11 13:58:07.794978] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:45.889 [2024-06-11 13:58:07.795013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.889 [2024-06-11 13:58:07.795024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.889 [2024-06-11 13:58:07.795031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.889 [2024-06-11 13:58:07.795037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.889 [2024-06-11 13:58:07.795043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.889 [2024-06-11 13:58:07.795048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.889 [2024-06-11 13:58:07.795053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.889 [2024-06-11 13:58:07.795059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.889 [2024-06-11 13:58:07.797250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.889 [2024-06-11 13:58:07.797262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.889 [2024-06-11 13:58:07.797282] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:45.889 [2024-06-11 13:58:07.804977] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.815001] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.825051] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.835077] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.845233] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.855264] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.865290] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.889 [2024-06-11 13:58:07.875313] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.890 [2024-06-11 13:58:07.885337] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.890 [2024-06-11 13:58:07.895362] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:45.890 [... remaining bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe notices, repeating roughly every 10 ms from 13:58:07.905387 through 13:58:08.790426, each reading '*NOTICE*: Unable to perform failover, already in progress.' ...] 
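Every nvme_qpair.c pair that follows is one READ that was still outstanding on qid:1 when the qpair was torn down by the removal, which is why every completion carries the same 'ABORTED - SQ DELETION (00/08)' status. When digging through a long try.txt like this one, a quick grep (illustrative only, not part of the test) is enough to count the aborted commands and confirm they all share that status:

# Count the aborted completions and list the distinct abort statuses in bdevperf's log.
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
grep -o 'ABORTED - [A-Z ]*([0-9a-f/]*)' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt | sort | uniq -c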
00:31:45.891 [2024-06-11 13:58:08.800008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 13:58:08.800125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x181800 00:31:45.891 [2024-06-11 13:58:08.800130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.891 [2024-06-11 
13:58:08.800137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x181800 00:31:45.891 [... remaining nvme_qpair.c: 243/474 pairs for lba 21584 through lba 21784: each outstanding READ (len:8) is printed and completed with 'ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0' ...] 00:31:45.892 [2024-06-11 13:58:08.800472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:21792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21864 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 
len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x181800 
00:31:45.892 [2024-06-11 13:58:08.800799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.892 [2024-06-11 13:58:08.800894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x181800 00:31:45.892 [2024-06-11 13:58:08.800899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 
13:58:08.800911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.800986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.893 [2024-06-11 13:58:08.801289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x181800 00:31:45.893 [2024-06-11 13:58:08.801294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 
cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.801369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.801374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 
00:31:45.894 [2024-06-11 13:58:08.809155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.809248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x181800 00:31:45.894 [2024-06-11 13:58:08.809252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 
13:58:08.821273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:45.894 [2024-06-11 13:58:08.821283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:45.894 [2024-06-11 13:58:08.821288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22520 len:8 PRP1 0x0 PRP2 0x0 00:31:45.894 [2024-06-11 13:58:08.821293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.894 [2024-06-11 13:58:08.822922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.894 [2024-06-11 13:58:08.823300] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.894 [2024-06-11 13:58:08.823312] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.894 [2024-06-11 13:58:08.823316] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.894 [2024-06-11 13:58:08.823327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.894 [2024-06-11 13:58:08.823332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.894 [2024-06-11 13:58:08.823340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.894 [2024-06-11 13:58:08.823346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.894 [2024-06-11 13:58:08.823351] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.894 [2024-06-11 13:58:08.823366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.894 [2024-06-11 13:58:08.823371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.894 [2024-06-11 13:58:09.825856] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.894 [2024-06-11 13:58:09.825873] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.894 [2024-06-11 13:58:09.825878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.894 [2024-06-11 13:58:09.825888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.894 [2024-06-11 13:58:09.825893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
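Note: the long run of paired READ / "ABORTED - SQ DELETION" records above is the qpair being torn down with I/O still queued; every outstanding READ on qid:1 is completed with that abort status before the controller reset sequence starts. A minimal, purely illustrative sketch of how a saved copy of this console output could be summarized offline follows; the file name console.log and the regexes are assumptions based only on the record format visible here, not part of the test itself.

import re
from collections import Counter

# Regexes modelled on the SPDK notice format shown in this log (assumed stable).
read_re  = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) sqid:(\d+) .*?lba:(\d+) len:(\d+)")
abort_re = re.compile(r"ABORTED - SQ DELETION \((\w+)/(\w+)\) qid:(\d+)")

lbas, aborts = [], Counter()
with open("console.log") as f:          # assumed: this console output saved to a file
    text = f.read()

for m in read_re.finditer(text):
    if m.group(1) == "READ":
        lbas.append(int(m.group(3)))    # LBA of each READ print (all aborted in this capture)
for m in abort_re.finditer(text):
    aborts[m.group(3)] += 1             # aborted completions per qid

if lbas:
    print(f"aborted READs: {len(lbas)}, lba range {min(lbas)}..{max(lbas)}")
print("SQ DELETION aborts per qid:", dict(aborts))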
00:31:45.894 [2024-06-11 13:58:09.825902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.894 [2024-06-11 13:58:09.825907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.894 [2024-06-11 13:58:09.825915] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.894 [2024-06-11 13:58:09.825929] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.895 [2024-06-11 13:58:09.825934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.895 [2024-06-11 13:58:10.828346] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.895 [2024-06-11 13:58:10.828374] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.895 [2024-06-11 13:58:10.828379] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.895 [2024-06-11 13:58:10.828389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.895 [2024-06-11 13:58:10.828395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.895 [2024-06-11 13:58:10.828403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.895 [2024-06-11 13:58:10.828408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.895 [2024-06-11 13:58:10.828414] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.895 [2024-06-11 13:58:10.828429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.895 [2024-06-11 13:58:10.828435] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.895 [2024-06-11 13:58:11.830996] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.895 [2024-06-11 13:58:11.831022] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.895 [2024-06-11 13:58:11.831028] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.895 [2024-06-11 13:58:11.831038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.895 [2024-06-11 13:58:11.831044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:31:45.895 [2024-06-11 13:58:11.831056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.895 [2024-06-11 13:58:11.831061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.895 [2024-06-11 13:58:11.831066] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.895 [2024-06-11 13:58:11.831081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.895 [2024-06-11 13:58:11.831086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.895 [2024-06-11 13:58:13.835823] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.895 [2024-06-11 13:58:13.835847] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.895 [2024-06-11 13:58:13.835861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.895 [2024-06-11 13:58:13.835868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.895 [2024-06-11 13:58:13.835876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.895 [2024-06-11 13:58:13.835881] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.895 [2024-06-11 13:58:13.835890] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.895 [2024-06-11 13:58:13.835907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.895 [2024-06-11 13:58:13.835912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.895 [2024-06-11 13:58:15.840637] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.895 [2024-06-11 13:58:15.840655] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.895 [2024-06-11 13:58:15.840670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.895 [2024-06-11 13:58:15.840676] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.895 [2024-06-11 13:58:15.840684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.895 [2024-06-11 13:58:15.840689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.895 [2024-06-11 13:58:15.840695] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.895 [2024-06-11 13:58:15.840709] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:45.895 [2024-06-11 13:58:15.840714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.895 [2024-06-11 13:58:17.845573] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.895 [2024-06-11 13:58:17.845591] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.895 [2024-06-11 13:58:17.845604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.895 [2024-06-11 13:58:17.845609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.895 [2024-06-11 13:58:17.845618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.895 [2024-06-11 13:58:17.845622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.895 [2024-06-11 13:58:17.845628] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.895 [2024-06-11 13:58:17.845641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.895 [2024-06-11 13:58:17.845646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.895 [2024-06-11 13:58:19.305554] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:45.895 [2024-06-11 13:58:19.305576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.895 [2024-06-11 13:58:19.305583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.895 [2024-06-11 13:58:19.305590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.895 [2024-06-11 13:58:19.305595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.895 [2024-06-11 13:58:19.305601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.895 [2024-06-11 13:58:19.305606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.895 [2024-06-11 13:58:19.305612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:45.895 [2024-06-11 13:58:19.305620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:16 sqhd:13b9 p:0 m:0 dnr:0 00:31:45.895 [2024-06-11 13:58:19.316890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.895 [2024-06-11 13:58:19.316911] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
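Note: the repeated RDMA_CM_EVENT_ADDR_ERROR / "controller reinitialization failed" / "resetting controller" cycles above are bdev_nvme retrying the reconnect of nqn.2016-06.io.spdk:system_mlx_0_0; the timestamps show the gap between attempts growing from roughly 1 s to roughly 2 s before the log moves on to system_mlx_0_1. A minimal sketch for extracting that retry cadence from a saved copy of this output (console.log is again an assumed file name, not produced by the test):

import re
from datetime import datetime

# Match the "resetting controller" notices for the system_mlx_0_0 path only.
reset_re = re.compile(
    r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\] "
    r"nvme_ctrlr\.c:\s*\d+:nvme_ctrlr_disconnect: \*NOTICE\*: "
    r"\[nqn\.2016-06\.io\.spdk:system_mlx_0_0\] resetting controller")

with open("console.log") as f:          # assumed: this console output saved to a file
    stamps = [datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
              for m in reset_re.finditer(f.read())]

for prev, cur in zip(stamps, stamps[1:]):
    print(f"{cur.time()}  retried after {(cur - prev).total_seconds():.3f}s")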
00:31:45.895 [2024-06-11 13:58:19.316934] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:45.895 [2024-06-11 13:58:19.316967] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.326973] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.337001] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.347027] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.357052] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.367076] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.377104] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.387132] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.397158] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.407184] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.417209] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.427236] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.437262] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.447289] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.457316] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.467343] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.477370] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.487394] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.497419] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.507444] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.517469] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.527492] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:45.895 [2024-06-11 13:58:19.537518] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.547543] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.557571] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.567595] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.577620] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.587644] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.597670] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.895 [2024-06-11 13:58:19.607696] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.617722] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.627746] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.637770] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.647797] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.657824] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.667852] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.677877] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.687903] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.697928] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.707952] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.717977] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.728002] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.738026] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.748052] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.758078] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:45.896 [2024-06-11 13:58:19.768104] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.778129] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.788153] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.798179] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.808205] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.818231] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.828255] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.838283] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.848307] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.850565] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.896 [2024-06-11 13:58:19.850575] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:31:45.896 [2024-06-11 13:58:19.850587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.896 [2024-06-11 13:58:19.850592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:31:45.896 [2024-06-11 13:58:19.850604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:31:45.896 [2024-06-11 13:58:19.850609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:31:45.896 [2024-06-11 13:58:19.850616] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:31:45.896 [2024-06-11 13:58:19.850629] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.896 [2024-06-11 13:58:19.850634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:31:45.896 [2024-06-11 13:58:19.858327] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.868352] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.878378] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.888402] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.898428] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:45.896 [2024-06-11 13:58:19.908453] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.918478] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.928503] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.938528] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.948553] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.958579] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.968603] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.978629] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.988654] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:19.998679] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.008706] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.018730] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.028756] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.038782] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.048806] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.058830] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.068854] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.078879] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.088903] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.098927] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.108952] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.118978] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.129004] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:45.896 [2024-06-11 13:58:20.139029] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.149053] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.159080] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.169106] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.179132] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.189157] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.199183] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.209208] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.219234] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.229258] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.239284] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.249310] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.259334] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.269360] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.279385] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.289411] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.299436] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:45.896 [2024-06-11 13:58:20.309462] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:45.896 [2024-06-11 13:58:20.319292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1bf300 00:31:45.896 [2024-06-11 13:58:20.319302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.896 [2024-06-11 13:58:20.319316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1bf300 00:31:45.896 [2024-06-11 13:58:20.319322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.896 [2024-06-11 13:58:20.319331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1bf300 00:31:45.896 [2024-06-11 13:58:20.319337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.896 [2024-06-11 13:58:20.319343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1bf300 00:31:45.896 [2024-06-11 13:58:20.319349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.896 [2024-06-11 13:58:20.319355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1bf300 00:31:45.896 [2024-06-11 13:58:20.319360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 
13:58:20.319414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:32536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bf300 00:31:45.897 [2024-06-11 13:58:20.319785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.897 [2024-06-11 13:58:20.319792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32608 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 
len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.319992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.319998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.320004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.320015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.320030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.320042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf300 
00:31:45.898 [2024-06-11 13:58:20.320053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf300 00:31:45.898 [2024-06-11 13:58:20.320066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.898 [2024-06-11 13:58:20.320145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.898 [2024-06-11 13:58:20.320151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320162] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 
13:58:20.320277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 
cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320501] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.899 [2024-06-11 13:58:20.320595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.899 [2024-06-11 13:58:20.320602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 
13:58:20.320618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33224 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.320777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:45.900 [2024-06-11 13:58:20.320782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32584 cdw0:c86c4210 sqhd:6540 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.332824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:45.900 [2024-06-11 13:58:20.332835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:45.900 [2024-06-11 13:58:20.332840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33264 len:8 PRP1 0x0 PRP2 0x0 00:31:45.900 [2024-06-11 13:58:20.332845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.900 [2024-06-11 13:58:20.332880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.900 [2024-06-11 13:58:20.333092] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.900 [2024-06-11 13:58:20.333102] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.900 [2024-06-11 13:58:20.333106] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.900 [2024-06-11 13:58:20.333116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.900 [2024-06-11 13:58:20.333123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:31:45.900 [2024-06-11 13:58:20.333133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.900 [2024-06-11 13:58:20.333140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.900 [2024-06-11 13:58:20.333146] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.900 [2024-06-11 13:58:20.333159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.900 [2024-06-11 13:58:20.333163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.900 [2024-06-11 13:58:20.906757] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:45.900 [2024-06-11 13:58:21.335661] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.900 [2024-06-11 13:58:21.335673] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.900 [2024-06-11 13:58:21.335678] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.900 [2024-06-11 13:58:21.335688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.900 [2024-06-11 13:58:21.335693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:31:45.900 [2024-06-11 13:58:21.335700] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.900 [2024-06-11 13:58:21.335705] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.900 [2024-06-11 13:58:21.335710] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.900 [2024-06-11 13:58:21.335722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.900 [2024-06-11 13:58:21.335727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.900 [2024-06-11 13:58:22.339720] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.900 [2024-06-11 13:58:22.339743] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.900 [2024-06-11 13:58:22.339748] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.900 [2024-06-11 13:58:22.339759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.900 [2024-06-11 13:58:22.339765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:31:45.900 [2024-06-11 13:58:22.339773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.900 [2024-06-11 13:58:22.339778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.900 [2024-06-11 13:58:22.339783] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.900 [2024-06-11 13:58:22.339798] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.900 [2024-06-11 13:58:22.339802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.900 [2024-06-11 13:58:23.342656] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:31:45.900 [2024-06-11 13:58:23.342685] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.900 [2024-06-11 13:58:23.342694] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.900 [2024-06-11 13:58:23.342706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.900 [2024-06-11 13:58:23.342712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:31:45.900 [2024-06-11 13:58:23.342721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.900 [2024-06-11 13:58:23.342726] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.900 [2024-06-11 13:58:23.342731] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.900 [2024-06-11 13:58:23.342748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.900 [2024-06-11 13:58:23.342753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.900 [2024-06-11 13:58:25.348430] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.900 [2024-06-11 13:58:25.348457] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.900 [2024-06-11 13:58:25.348474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.900 [2024-06-11 13:58:25.348481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:31:45.900 [2024-06-11 13:58:25.348504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.900 [2024-06-11 13:58:25.348509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.900 [2024-06-11 13:58:25.348515] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.900 [2024-06-11 13:58:25.348540] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
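The loop above is the bdev_nvme layer inside the bdevperf process disconnecting and retrying the controller while RDMA address resolution keeps failing for the removed port. Its progress can also be watched out-of-band over the same RPC socket the test drives; a minimal sketch, not part of the trace, assuming scripts/rpc.py from this checkout and the /var/tmp/bdevperf.sock socket used by this suite:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # list the attached NVMe-oF controllers once per second while the reset loop runs
  while sleep 1; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  done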
00:31:45.901 [2024-06-11 13:58:25.348545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.901 [2024-06-11 13:58:27.355410] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.901 [2024-06-11 13:58:27.355432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.901 [2024-06-11 13:58:27.355449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.901 [2024-06-11 13:58:27.355455] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:31:45.901 [2024-06-11 13:58:27.355806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.901 [2024-06-11 13:58:27.355814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.901 [2024-06-11 13:58:27.355820] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.901 [2024-06-11 13:58:27.355846] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.901 [2024-06-11 13:58:27.355852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.901 [2024-06-11 13:58:29.363880] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.901 [2024-06-11 13:58:29.363913] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.901 [2024-06-11 13:58:29.363932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.901 [2024-06-11 13:58:29.363938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:31:45.901 [2024-06-11 13:58:29.363955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:31:45.901 [2024-06-11 13:58:29.363960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:31:45.901 [2024-06-11 13:58:29.363965] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:31:45.901 [2024-06-11 13:58:29.363987] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:45.901 [2024-06-11 13:58:29.363992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:31:45.901 [2024-06-11 13:58:31.370067] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:31:45.901 [2024-06-11 13:58:31.370094] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:31:45.901 [2024-06-11 13:58:31.370112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:31:45.901 [2024-06-11 13:58:31.370118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:31:45.901 [2024-06-11 13:58:31.370125] bdev_nvme.c:2884:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. 
Defer failover until reset completes.
00:31:45.901 [2024-06-11 13:58:31.370142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:31:45.901 [2024-06-11 13:58:31.370147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:31:45.901 [2024-06-11 13:58:31.370153] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:31:45.901 [2024-06-11 13:58:31.370175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:31:45.901 [2024-06-11 13:58:31.370202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:31:45.901 [2024-06-11 13:58:31.429530] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:45.901
00:31:45.901 Latency(us)
00:31:45.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.901 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:45.901 Verification LBA range: start 0x0 length 0x8000
00:31:45.901 Nvme_mlx_0_0n1 : 90.01 12525.03 48.93 0.00 0.00 10199.56 535.89 14092861.44
00:31:45.901 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:45.901 Verification LBA range: start 0x0 length 0x8000
00:31:45.901 Nvme_mlx_0_1n1 : 90.01 8648.99 33.79 0.00 0.00 14781.72 2443.95 13086228.48
00:31:45.901 ===================================================================================================================
00:31:45.901 Total : 21174.01 82.71 0.00 0.00 12071.29 535.89 14092861.44
00:31:45.901 Received shutdown signal, test time was about 90.000000 seconds
00:31:45.901
00:31:45.901 Latency(us)
00:31:45.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:45.901 ===================================================================================================================
00:31:45.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 2254073
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 2254073 ']'
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 2254073
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2254073
00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:31:45.901 13:59:33
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2254073' 00:31:45.901 killing process with pid 2254073 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 2254073 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 2254073 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid= 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0 00:31:45.901 00:31:45.901 real 1m33.001s 00:31:45.901 user 4m20.316s 00:31:45.901 sys 0m5.255s 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:31:45.901 ************************************ 00:31:45.901 END TEST nvmf_device_removal_pci_remove_no_srq 00:31:45.901 ************************************ 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:31:45.901 ************************************ 00:31:45.901 START TEST nvmf_device_removal_pci_remove 00:31:45.901 ************************************ 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1124 -- # test_remove_and_rescan 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=2271946 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 2271946 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 2271946 ']' 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.901 13:59:33 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:45.901 13:59:33 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 [2024-06-11 13:59:33.607141] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:31:45.902 [2024-06-11 13:59:33.607189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.902 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.902 [2024-06-11 13:59:33.668414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:45.902 [2024-06-11 13:59:33.732584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.902 [2024-06-11 13:59:33.732625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.902 [2024-06-11 13:59:33.732632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.902 [2024-06-11 13:59:33.732638] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.902 [2024-06-11 13:59:33.732644] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
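For reference, the target start-up behind the EAL and trace notices above reduces to launching nvmf_tgt with the flags shown in the trace and waiting for its RPC socket; a rough equivalent of what nvmfappstart and waitforlisten do here (the polling loop below is a simplification, not the suite's helper):

  nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # block until the app answers on its default UNIX-domain socket
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done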
00:31:45.902 [2024-06-11 13:59:33.732791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.902 [2024-06-11 13:59:33.732791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 [2024-06-11 13:59:34.433763] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18057b0/0x1809ca0) succeed. 00:31:45.902 [2024-06-11 13:59:34.446974] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1806cb0/0x184b330) succeed. 
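The xtrace that follows provisions one malloc bdev, subsystem, namespace and RDMA listener per mlx interface; condensed for mlx_0_0, the RPC sequence amounts to roughly the following (assuming rpc_cmd is a thin wrapper around scripts/rpc.py on the default socket):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 128 512 -b mlx_0_0        # 128 MB malloc bdev, 512-byte blocks, named after the netdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420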
00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:31:45.902 13:59:34 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:31:45.902 13:59:34 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.902 [2024-06-11 13:59:34.632565] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:31:45.902 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.903 13:59:34 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 [2024-06-11 13:59:34.701300] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@91 -- # bdevperf_pid=2272315 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat 
$testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 2272315 /var/tmp/bdevperf.sock 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 2272315 ']' 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:45.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:45.903 13:59:34 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
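Stripped of the xtrace plumbing, the initiator setup traced over the next lines launches bdevperf in RPC-wait mode, attaches one NVMe-oF controller per exported subsystem over its own socket, and then kicks off the timed run via bdevperf.py; roughly (paths and arguments as they appear in the trace):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  bdevperf_pid=$!
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 \
      -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1
  # start the verify workload; the test then yanks the NICs underneath it
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &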
00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.903 Nvme_mlx_0_0n1 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:31:45.903 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:45.904 Nvme_mlx_0_1n1 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=2272477 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:31:45.904 13:59:35 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/infiniband 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 
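The hot-unplug of mlx_0_0 that follows resolves the netdev's PCI node through sysfs and writes 1 into its remove attribute; the xtrace only captures the echo and the readlink, so the redirect target below is inferred from the standard PCI sysfs interface rather than taken from the trace:

  dev_name=mlx_0_0
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:98:00.0/net/$dev_name/device)
  # on this node pci_dir resolves to /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0
  echo 1 > "$pci_dir/remove"        # assumed target of the traced 'echo 1'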
00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.453 mlx5_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:31:48.453 13:59:40 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.0/net/mlx_0_0/device 00:31:48.453 [2024-06-11 13:59:40.882576] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 00:31:48.453 [2024-06-11 13:59:40.882652] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:48.453 [2024-06-11 13:59:40.885471] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:48.453 [2024-06-11 13:59:40.885499] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 32 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_0 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.047 
13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:31:55.047 13:59:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.0/net 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:31:55.991 13:59:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:31:56.252 [2024-06-11 13:59:49.043100] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18086d0/0x1809ca0) succeed. 00:31:56.252 [2024-06-11 13:59:49.043156] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
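Recovery then mirrors the removal: rescan the PCI bus, bring the netdev back up, re-add its address, and poll the target until a second IB device is registered again. A condensed sketch of the steps the surrounding trace performs (the rescan target is inferred from the bare 'echo 1'; /sys/bus/pci/rescan is the usual knob, and the retry interval here is arbitrary):

  echo 1 > /sys/bus/pci/rescan
  ip link set mlx_0_0 up
  ip addr add 192.168.100.8/24 dev mlx_0_0
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 10); do
      ib_count=$($rpc nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length')
      (( ib_count > 1 )) && break   # i.e. greater than the ib_count_after_remove seen above
      sleep 2
  done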
00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:59.558 [2024-06-11 13:59:52.278967] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:59.558 [2024-06-11 13:59:52.279002] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:31:59.558 [2024-06-11 13:59:52.279015] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:31:59.558 [2024-06-11 13:59:52.279035] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- 
target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/infiniband 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- 
# [[ 0 == 0 ]] 00:31:59.558 mlx5_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:31:59.558 13:59:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:98:00.1/net/mlx_0_1/device 00:31:59.558 [2024-06-11 13:59:52.433761] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:31:59.558 [2024-06-11 13:59:52.433830] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:59.558 [2024-06-11 13:59:52.443126] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:31:59.558 [2024-06-11 13:59:52.443200] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 129 00:32:07.704 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep mlx5_1 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local 
rdma_dev_name= 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:32:07.705 13:59:59 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:32:07.705 [2024-06-11 14:00:00.277541] rdma.c:3263:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x17f97e0, err 11. Skip rescan. 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:97/0000:97:02.0/0000:98:00.1/net 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:32:07.705 14:00:00 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:32:07.967 [2024-06-11 14:00:00.655245] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1808ac0/0x184b330) succeed. 00:32:07.967 [2024-06-11 14:00:00.655319] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
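
The remove/rescan sequence traced above is plain Linux PCI hotplug driven from sysfs: the script resolves mlx_0_1 to its PCI function directory, writes 1 to that function's remove attribute, later writes 1 to the bus-wide rescan attribute, waits for the netdev to reappear under the device's net/ directory, and brings the link back up. The following is a minimal standalone sketch of that cycle, not the suite's own helpers; the exact redirection targets are inferred (the trace only shows the bare "echo 1" next to get_pci_dir), and the sudo/tee plumbing, sleep interval and variable names are illustrative.

#!/usr/bin/env bash
# Sketch of the sysfs remove/rescan cycle seen in the trace above.
# Assumptions: mlx_0_1 is the netdev, and the standard kernel attributes
# /sys/bus/pci/devices/<BDF>/remove and /sys/bus/pci/rescan are the targets
# of the "echo 1" commands shown in the log.
set -euo pipefail

dev_name=mlx_0_1
# Resolve the netdev to its PCI device directory (what get_pci_dir does).
pci_dir=$(readlink -f "/sys/class/net/${dev_name}/device")

# Hot-remove the PCI function ...
echo 1 | sudo tee "${pci_dir}/remove" >/dev/null
# ... and later ask the kernel to rescan the bus so it is re-enumerated.
echo 1 | sudo tee /sys/bus/pci/rescan >/dev/null

# Bounded wait for the netdev to come back, then bring the link up,
# mirroring the "for i in $(seq 1 10)" loop and "ip link set ... up" above.
new_dev=
for _ in $(seq 1 10); do
  new_dev=$(ls "${pci_dir}/net" 2>/dev/null || true)
  [[ -n $new_dev ]] && break
  sleep 1
done
sudo ip link set "${new_dev:?netdev did not reappear}" up

In the run above the removal and the rescan are separated by the checks against the nvmf target shown in the trace; the sketch collapses them only to show the sysfs mechanics.
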
00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:32:11.261 [2024-06-11 14:00:03.954383] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:32:11.261 [2024-06-11 14:00:03.954418] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:32:11.261 [2024-06-11 14:00:03.954430] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:32:11.261 [2024-06-11 14:00:03.954440] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:32:11.261 14:00:03 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 2272477 00:33:19.021 0 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # 
killprocess 2272315 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 2272315 ']' 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 2272315 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2272315 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2272315' 00:33:19.021 killing process with pid 2272315 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 2272315 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 2272315 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid= 00:33:19.021 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:33:19.021 [2024-06-11 13:59:34.756636] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:33:19.021 [2024-06-11 13:59:34.756687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272315 ] 00:33:19.021 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.021 [2024-06-11 13:59:34.806739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.021 [2024-06-11 13:59:34.859626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.021 Running I/O for 90 seconds... 
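
The bounded polling loops in the trace (device_removal.sh@140, @147 and @186) all lean on the same RPC, nvmf_get_stats, filtered with jq either to grep for a specific RDMA device name or to count the devices the target currently owns. Below is a sketch of that probe, assuming rpc_cmd is simply a wrapper around scripts/rpc.py talking to the running nvmf_tgt on its default socket; the jq filters are copied from the trace, while the function names, the 2-second sleep and the grep -q flag are illustrative.

#!/usr/bin/env bash
# Sketch of the nvmf_get_stats polling used by the removal test; the jq
# filters come from the trace, the surrounding scaffolding is illustrative.
set -euo pipefail

rpc=./scripts/rpc.py   # stand-in for the suite's rpc_cmd wrapper

# Is a given RDMA device (e.g. mlx5_1) still known to the nvmf target?
rdma_dev_exists_in_tgt() {
  "$rpc" nvmf_get_stats \
    | jq -r '.poll_groups[0].transports[].devices[].name' \
    | grep -q "$1"
}

# How many RDMA devices does the target currently see?
rdma_dev_count_in_tgt() {
  "$rpc" nvmf_get_stats \
    | jq -r '.poll_groups[0].transports[].devices | length'
}

# Bounded poll until the device count grows past what was left after the
# removal (ib_count_after_remove=1 in the run above).
ib_count_after_remove=1
for _ in $(seq 1 10); do
  (( $(rdma_dev_count_in_tgt) > ib_count_after_remove )) && break
  sleep 2
done

The @147 loop in the trace uses the first probe in the opposite direction, breaking once grep no longer finds mlx5_1 in the target's device list.
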
00:33:19.021 [2024-06-11 13:59:40.876888] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:33:19.021 [2024-06-11 13:59:40.876920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.021 [2024-06-11 13:59:40.876928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.021 [2024-06-11 13:59:40.876935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.021 [2024-06-11 13:59:40.876941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.021 [2024-06-11 13:59:40.876947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.021 [2024-06-11 13:59:40.876952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.021 [2024-06-11 13:59:40.876958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.021 [2024-06-11 13:59:40.876963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.021 [2024-06-11 13:59:40.879125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.021 [2024-06-11 13:59:40.879137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.021 [2024-06-11 13:59:40.879156] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:33:19.021 [2024-06-11 13:59:40.886887] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.896912] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.906935] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.917252] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.927941] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.937983] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.948008] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.958051] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.968074] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.978100] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.021 [2024-06-11 13:59:40.988125] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:40.998150] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.008175] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.018200] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.028224] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.038250] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.048273] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.058299] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.068324] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.078348] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.088373] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.098399] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.108426] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.118451] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.128475] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.138501] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.148527] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.158899] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.168918] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.179210] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.021 [2024-06-11 13:59:41.189236] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.199973] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.210171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.022 [2024-06-11 13:59:41.220257] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.230278] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.240972] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.251087] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.261111] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.271137] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.281310] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.291498] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.301523] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.312030] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.322054] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.332079] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.342105] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.352327] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.362659] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.372686] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.382710] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.393202] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.403469] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.413493] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.423519] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.434178] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.444477] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.022 [2024-06-11 13:59:41.455135] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.465431] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.475456] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.485721] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.495744] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.506388] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.516415] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.526439] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.536463] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.546490] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.556516] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.566543] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.576566] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.586590] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.596616] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.606640] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.616665] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.626689] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.636715] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.646740] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.656764] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.666789] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.676813] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.022 [2024-06-11 13:59:41.686837] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.696863] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.706888] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.716912] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.726935] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.737218] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.747237] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.757263] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.767288] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.777804] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.787830] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.797856] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.807880] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.818294] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.828321] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.838980] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.849320] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.859344] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.869371] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.022 [2024-06-11 13:59:41.879613] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.022 [2024-06-11 13:59:41.881929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.881942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.022 [2024-06-11 13:59:41.881959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.881965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.022 [2024-06-11 13:59:41.881976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.881981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.022 [2024-06-11 13:59:41.881992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.881997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.022 [2024-06-11 13:59:41.882007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.882013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.022 [2024-06-11 13:59:41.882027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.882032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.022 [2024-06-11 13:59:41.882042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1810ef 00:33:19.022 [2024-06-11 13:59:41.882047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 
13:59:41.882089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:57808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882371] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:57912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.023 [2024-06-11 13:59:41.882599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1810ef 00:33:19.023 [2024-06-11 13:59:41.882604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57984 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 
len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x1810ef 00:33:19.024 [2024-06-11 13:59:41.882911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.882919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x1810ef 00:33:19.024 
[2024-06-11 13:59:41.882924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.894917] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:33:19.024 [2024-06-11 13:59:41.894963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:19.024 [2024-06-11 13:59:41.894968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:19.024 [2024-06-11 13:59:41.894974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58136 len:8 PRP1 0x0 PRP2 0x0 00:33:19.024 [2024-06-11 13:59:41.894979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.024 [2024-06-11 13:59:41.896656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.024 [2024-06-11 13:59:41.896829] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.024 [2024-06-11 13:59:41.896838] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.024 [2024-06-11 13:59:41.896843] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.024 [2024-06-11 13:59:41.896853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.024 [2024-06-11 13:59:41.896859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.024 [2024-06-11 13:59:41.896870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.024 [2024-06-11 13:59:41.896876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.024 [2024-06-11 13:59:41.896882] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.024 [2024-06-11 13:59:41.896896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.024 [2024-06-11 13:59:41.896901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.024 [2024-06-11 13:59:42.899324] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.024 [2024-06-11 13:59:42.899345] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.024 [2024-06-11 13:59:42.899349] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.024 [2024-06-11 13:59:42.899361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.024 [2024-06-11 13:59:42.899366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:33:19.024 [2024-06-11 13:59:42.899375] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.024 [2024-06-11 13:59:42.899380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.024 [2024-06-11 13:59:42.899385] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.024 [2024-06-11 13:59:42.899401] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.024 [2024-06-11 13:59:42.899406] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.024 [2024-06-11 13:59:43.901901] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.024 [2024-06-11 13:59:43.901923] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.024 [2024-06-11 13:59:43.901929] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.024 [2024-06-11 13:59:43.901941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.024 [2024-06-11 13:59:43.901947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.024 [2024-06-11 13:59:43.901955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.025 [2024-06-11 13:59:43.901960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.025 [2024-06-11 13:59:43.901966] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.025 [2024-06-11 13:59:43.901980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.025 [2024-06-11 13:59:43.901985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.025 [2024-06-11 13:59:44.904563] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.025 [2024-06-11 13:59:44.904584] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.025 [2024-06-11 13:59:44.904589] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.025 [2024-06-11 13:59:44.904599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.025 [2024-06-11 13:59:44.904608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:33:19.025 [2024-06-11 13:59:44.904616] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.025 [2024-06-11 13:59:44.904621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.025 [2024-06-11 13:59:44.904626] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.025 [2024-06-11 13:59:44.904641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.025 [2024-06-11 13:59:44.904646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.025 [2024-06-11 13:59:46.909955] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.025 [2024-06-11 13:59:46.909977] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.025 [2024-06-11 13:59:46.909992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.025 [2024-06-11 13:59:46.909998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.025 [2024-06-11 13:59:46.910007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.025 [2024-06-11 13:59:46.910011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.025 [2024-06-11 13:59:46.910019] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.025 [2024-06-11 13:59:46.910035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.025 [2024-06-11 13:59:46.910040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.025 [2024-06-11 13:59:48.915448] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.025 [2024-06-11 13:59:48.915467] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.025 [2024-06-11 13:59:48.915480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.025 [2024-06-11 13:59:48.915487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.025 [2024-06-11 13:59:48.915495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.025 [2024-06-11 13:59:48.915500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.025 [2024-06-11 13:59:48.915507] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.025 [2024-06-11 13:59:48.915520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.025 [2024-06-11 13:59:48.915525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.025 [2024-06-11 13:59:50.920257] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.025 [2024-06-11 13:59:50.920281] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.025 [2024-06-11 13:59:50.920296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.025 [2024-06-11 13:59:50.920301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.025 [2024-06-11 13:59:50.920310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.025 [2024-06-11 13:59:50.920315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.025 [2024-06-11 13:59:50.920323] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.025 [2024-06-11 13:59:50.920341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.025 [2024-06-11 13:59:50.920346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.025 [2024-06-11 13:59:52.432649] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:33:19.025 [2024-06-11 13:59:52.432671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.025 [2024-06-11 13:59:52.432679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.025 [2024-06-11 13:59:52.432685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.025 [2024-06-11 13:59:52.432690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.025 [2024-06-11 13:59:52.432696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.025 [2024-06-11 13:59:52.432702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.025 [2024-06-11 13:59:52.432708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:19.025 [2024-06-11 13:59:52.432713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32739 cdw0:16 sqhd:93b9 p:0 m:0 dnr:0 00:33:19.025 [2024-06-11 13:59:52.444886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.025 [2024-06-11 13:59:52.444903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:33:19.025 [2024-06-11 13:59:52.444923] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:33:19.025 [2024-06-11 13:59:52.444952] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.454962] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.464988] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.475013] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.485040] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.495068] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.505094] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.515119] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.525144] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.535171] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.545197] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.555221] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.565245] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.575273] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.585299] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.595326] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.605353] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.615380] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.025 [2024-06-11 13:59:52.625406] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.635432] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.645460] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.655485] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.026 [2024-06-11 13:59:52.665512] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.675537] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.685562] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.695587] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.705614] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.715638] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.725666] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.735693] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.745717] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.755741] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.765768] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.775795] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.785820] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.795847] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.805871] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.815898] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.825925] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.835951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.845975] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.856000] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.866028] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.876052] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.886078] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.026 [2024-06-11 13:59:52.896103] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.906127] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.916151] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.925076] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.026 [2024-06-11 13:59:52.925083] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:33:19.026 [2024-06-11 13:59:52.925097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.026 [2024-06-11 13:59:52.925102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:33:19.026 [2024-06-11 13:59:52.925111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:33:19.026 [2024-06-11 13:59:52.925116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:33:19.026 [2024-06-11 13:59:52.925122] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:33:19.026 [2024-06-11 13:59:52.925137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.026 [2024-06-11 13:59:52.925142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:33:19.026 [2024-06-11 13:59:52.926175] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.936198] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.946223] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.956247] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.966271] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.976295] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.986322] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:52.996347] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.006372] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.016395] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.026420] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.026 [2024-06-11 13:59:53.036446] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.046470] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.056496] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.066521] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.076544] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.086568] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.096594] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.106620] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.116644] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.126669] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.136695] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.146720] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.156746] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.166770] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.176795] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.186820] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.196844] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.206869] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.216894] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.226920] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.236944] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.246968] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.256994] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.026 [2024-06-11 13:59:53.267022] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.277048] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.287074] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.297100] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.307126] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.317152] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.327175] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.337201] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.347225] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.357250] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.367276] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.377302] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.387327] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.397351] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.407376] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.417401] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.026 [2024-06-11 13:59:53.427427] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:19.027 [2024-06-11 13:59:53.437453] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:33:19.027 [2024-06-11 13:59:53.447281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ce000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079cc000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ca000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c8000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c6000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c4000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c2000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c0000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079be000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 
13:59:53.447398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079bc000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ba000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b8000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b6000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b4000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b2000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b0000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ae000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ac000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079aa000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a8000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a6000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a4000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a2000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a0000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799e000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799c000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799a000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447611] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007998000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007996000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x1bf0ef 00:33:19.027 [2024-06-11 13:59:53.447700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.027 [2024-06-11 13:59:53.447707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 
lba:52704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52776 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 
len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.447989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.447995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1bf0ef 
00:33:19.028 [2024-06-11 13:59:53.448045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1bf0ef 00:33:19.028 [2024-06-11 13:59:53.448131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.028 [2024-06-11 13:59:53.448138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 
13:59:53.448155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448367] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.029 [2024-06-11 13:59:53.448515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1bf0ef 00:33:19.029 [2024-06-11 13:59:53.448520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:53288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:53376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.448715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.448721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.456345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.456371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:53392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.456382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.456389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.456394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.456400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.456405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.456412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.456417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.456424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:19.030 [2024-06-11 13:59:53.456429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32739 cdw0:8a39b2d0 sqhd:e540 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.468469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:19.030 [2024-06-11 13:59:53.468479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:19.030 [2024-06-11 13:59:53.468484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53432 len:8 PRP1 0x0 PRP2 0x0 00:33:19.030 [2024-06-11 13:59:53.468490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:19.030 [2024-06-11 13:59:53.468523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] 
resetting controller 00:33:19.030 [2024-06-11 13:59:53.468812] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.030 [2024-06-11 13:59:53.468821] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.030 [2024-06-11 13:59:53.468825] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.030 [2024-06-11 13:59:53.468836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.030 [2024-06-11 13:59:53.468842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.030 [2024-06-11 13:59:53.468850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.030 [2024-06-11 13:59:53.468855] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.030 [2024-06-11 13:59:53.468861] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.030 [2024-06-11 13:59:53.468874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.030 [2024-06-11 13:59:53.468879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.030 [2024-06-11 13:59:53.966326] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:19.030 [2024-06-11 13:59:54.471281] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.030 [2024-06-11 13:59:54.471295] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.030 [2024-06-11 13:59:54.471303] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 13:59:54.471314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 13:59:54.471320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 13:59:54.471327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 13:59:54.471332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 13:59:54.471337] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 13:59:54.471349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.031 [2024-06-11 13:59:54.471354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 13:59:55.474149] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.031 [2024-06-11 13:59:55.474174] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.031 [2024-06-11 13:59:55.474179] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 13:59:55.474191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 13:59:55.474196] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 13:59:55.474211] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 13:59:55.474217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 13:59:55.474222] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 13:59:55.474236] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.031 [2024-06-11 13:59:55.474241] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 13:59:56.477868] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:33:19.031 [2024-06-11 13:59:56.477900] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.031 [2024-06-11 13:59:56.477906] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 13:59:56.477920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 13:59:56.477926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 13:59:56.477934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 13:59:56.477939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 13:59:56.477945] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 13:59:56.477962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.031 [2024-06-11 13:59:56.477967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 13:59:58.483965] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.031 [2024-06-11 13:59:58.483998] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 13:59:58.484022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 13:59:58.484028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 13:59:58.484035] bdev_nvme.c:2884:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Reset is already in progress. Defer failover until reset completes. 00:33:19.031 [2024-06-11 13:59:58.484404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 13:59:58.484413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 13:59:58.484418] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 13:59:58.484443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.031 [2024-06-11 13:59:58.484467] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 13:59:59.487854] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.031 [2024-06-11 13:59:59.487879] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 13:59:59.487896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 13:59:59.487902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 13:59:59.488681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 13:59:59.488690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 13:59:59.488696] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 13:59:59.488725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:19.031 [2024-06-11 13:59:59.488731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 14:00:01.493940] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.031 [2024-06-11 14:00:01.493973] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 14:00:01.493991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 14:00:01.493997] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 14:00:01.494025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 14:00:01.494032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 14:00:01.494037] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 14:00:01.494061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.031 [2024-06-11 14:00:01.494067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 14:00:03.499463] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:33:19.031 [2024-06-11 14:00:03.499490] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:33:19.031 [2024-06-11 14:00:03.499506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.031 [2024-06-11 14:00:03.499517] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:33:19.031 [2024-06-11 14:00:03.500900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:33:19.031 [2024-06-11 14:00:03.500910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:33:19.031 [2024-06-11 14:00:03.500916] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:33:19.031 [2024-06-11 14:00:03.500949] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:19.031 [2024-06-11 14:00:03.500955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:33:19.031 [2024-06-11 14:00:04.570890] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
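The retry cycle above is the expected window while the mlx5 port behind nqn.2016-06.io.spdk:system_mlx_0_1 is hot-removed and rescanned: RDMA_CM cannot resolve 192.168.100.9, so bdev_nvme fails each reconnect and schedules another reset until address resolution works again and "Resetting controller successful" is logged. A minimal way to watch that window by hand is sketched below; it assumes the interface and IP names printed elsewhere in this log (mlx_0_1, 192.168.100.9), assumes the IB device is named mlx5_1 (not shown in the trace), and uses standard iproute2/rdma-core tools rather than anything from device_removal.sh.
  # illustrative manual check, not part of the test scripts
  ls /sys/class/infiniband/            # the IB device disappears on PCI remove and returns after rescan
  ip -o -4 addr show mlx_0_1           # 192.168.100.9/24 must be back before RDMA address resolution can succeed
  ibv_devinfo -d mlx5_1 | grep state   # port should report PORT_ACTIVE again before reconnects start working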
00:33:19.031 
00:33:19.031 Latency(us)
00:33:19.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:19.031 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:19.031 Verification LBA range: start 0x0 length 0x8000
00:33:19.031 Nvme_mlx_0_0n1 : 90.01 12931.47 50.51 0.00 0.00 9878.79 1925.12 14036937.39
00:33:19.031 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:19.031 Verification LBA range: start 0x0 length 0x8000
00:33:19.031 Nvme_mlx_0_1n1 : 90.01 8858.12 34.60 0.00 0.00 14432.50 2348.37 13086228.48
00:33:19.031 ===================================================================================================================
00:33:19.031 Total : 21789.58 85.12 0.00 0.00 11730.06 1925.12 14036937.39
00:33:19.031 Received shutdown signal, test time was about 90.000000 seconds
00:33:19.031 
00:33:19.031 Latency(us)
00:33:19.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:19.031 ===================================================================================================================
00:33:19.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:19.031 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:33:19.031 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:33:19.031 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 2271946
00:33:19.031 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 2271946 ']'
00:33:19.031 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 2271946
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2271946
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2271946'
00:33:19.032 killing process with pid 2271946
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 2271946
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 2271946
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid=
00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0
00:33:19.032 
00:33:19.032 real 1m33.025s
00:33:19.032 user 4m20.487s
00:33:19.032 sys 0m5.260s 14:01:06 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:33:19.032 ************************************ 00:33:19.032 END TEST nvmf_device_removal_pci_remove 00:33:19.032 ************************************ 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:19.032 rmmod nvme_rdma 00:33:19.032 rmmod nvme_fabrics 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:33:19.032 00:33:19.032 real 3m13.551s 00:33:19.032 user 8m43.068s 00:33:19.032 sys 0m15.893s 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:19.032 14:01:06 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:33:19.032 ************************************ 00:33:19.032 END TEST nvmf_device_removal 00:33:19.032 ************************************ 00:33:19.032 14:01:06 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:33:19.032 14:01:06 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:19.032 14:01:06 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:19.032 14:01:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:19.032 ************************************ 00:33:19.032 START TEST nvmf_srq_overwhelm 00:33:19.032 ************************************ 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:33:19.032 * Looking for test storage... 
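Before srq_overwhelm begins, the nvmftestfini trace above tears down the initiator side so the next suite starts from a clean slate: modprobe -v -r nvme-rdma / nvme-fabrics (with the matching rmmod output) unload the kernel fabrics stack, and clean_bond_device checks ip link for any leftover bond_nvmf interfaces. A rough standalone equivalent is sketched below, assuming root and no active NVMe-oF connections; the real helpers live in nvmf/common.sh and target/device_removal.sh.
  # illustrative teardown mirroring the cleanup traced above
  sudo modprobe -v -r nvme-rdma        # also removes nvme_rdma's dependent users, as shown in the rmmod output
  sudo modprobe -v -r nvme-fabrics
  ip link | grep bond_nvmf || true     # clean_bond_device: nothing left to delete in this run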
00:33:19.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:33:19.032 14:01:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.010 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:33:21.011 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:33:21.011 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:33:21.011 Found net devices under 0000:98:00.0: mlx_0_0 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:33:21.011 Found net devices under 0000:98:00.1: mlx_0_1 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:21.011 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:21.273 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:33:21.274 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:21.274 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:33:21.274 altname enp152s0f0np0 00:33:21.274 altname ens817f0np0 00:33:21.274 inet 192.168.100.8/24 scope global mlx_0_0 00:33:21.274 valid_lft forever preferred_lft forever 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:33:21.274 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:21.274 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:33:21.274 altname enp152s0f1np1 00:33:21.274 altname ens817f1np1 00:33:21.274 inet 192.168.100.9/24 scope global mlx_0_1 00:33:21.274 valid_lft forever preferred_lft forever 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
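The block above is the RDMA bring-up for the new test: rdma_device_init loads the IB core and RDMA CM modules, then allocate_nic_ips walks the mlx5 netdevs and ensures each port has its 192.168.100.x test address. A condensed manual equivalent is sketched below; the interface names and addresses are the ones printed in this trace (mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9), while the real logic in nvmf/common.sh discovers them generically.
  # illustrative manual equivalent of load_ib_rdma_modules + allocate_nic_ips
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do sudo modprobe "$m"; done
  sudo ip addr add 192.168.100.8/24 dev mlx_0_0       # becomes NVMF_FIRST_TARGET_IP
  sudo ip addr add 192.168.100.9/24 dev mlx_0_1       # becomes NVMF_SECOND_TARGET_IP
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # prints 192.168.100.8, as in the trace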
00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:33:21.274 14:01:13 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:33:21.274 
192.168.100.9' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:33:21.274 192.168.100.9' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:33:21.274 192.168.100.9' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2294289 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2294289 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@830 -- # '[' -z 2294289 ']' 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:21.274 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:21.274 [2024-06-11 14:01:14.161516] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:33:21.275 [2024-06-11 14:01:14.161584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.536 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.536 [2024-06-11 14:01:14.225047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:21.536 [2024-06-11 14:01:14.291641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:21.536 [2024-06-11 14:01:14.291676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.536 [2024-06-11 14:01:14.291683] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.536 [2024-06-11 14:01:14.291689] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.536 [2024-06-11 14:01:14.291695] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.536 [2024-06-11 14:01:14.291831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.536 [2024-06-11 14:01:14.291951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:21.536 [2024-06-11 14:01:14.292077] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.536 [2024-06-11 14:01:14.292077] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@863 -- # return 0 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.107 14:01:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:22.107 [2024-06-11 14:01:15.017650] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa5e90/0x1aaa380) succeed. 00:33:22.369 [2024-06-11 14:01:15.032679] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa74d0/0x1aeba10) succeed. 
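With nvmf_tgt running and both mlx5 IB devices created, srq_overwhelm.sh provisions six RDMA subsystems and connects to each in turn; the trace below repeats the same RPC sequence plus an nvme connect for cnode0 through cnode5. A condensed sketch of that loop, with rpc_cmd standing in for the test harness wrapper around scripts/rpc.py and the NQNs, serials and addresses copied from the trace:

# Condensed sketch of the srq_overwhelm.sh@20-28 sequence traced below.
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i      # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    waitforblk nvme${i}n1                              # block until /dev/nvme${i}n1 shows up
done

Each pass leaves one more /dev/nvmeXn1 block device behind, which is what the fio run later in this log spreads its 78 threads across.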
00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:22.369 Malloc0 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:22.369 [2024-06-11 14:01:15.139996] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:22.369 14:01:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme0n1 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme0n1 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:23.754 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:24.014 Malloc1 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:24.014 14:01:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme1n1 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme1n1 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:25.401 Malloc2 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:25.401 14:01:18 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:33:26.788 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:33:26.788 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:33:26.788 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:33:26.788 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme2n1 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme2n1 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:27.049 Malloc3 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.049 14:01:19 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:27.049 14:01:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme3n1 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme3n1 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 Malloc4 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
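Each nvme connect above is followed by a waitforblk gate before the next subsystem is set up; the trace only shows the two lsblk/grep probes it issues. A rough sketch of that wait pattern, where the retry limit and sleep interval are assumptions and only the lsblk/grep probe is taken from the trace (the real helper lives in common/autotest_common.sh):

# Rough sketch of the waitforblk gate traced above (retry count and sleep are assumed).
waitforblk_sketch() {
    local name=$1 i=0
    while ! lsblk -l -o NAME | grep -q -w "$name"; do
        (( ++i > 15 )) && return 1          # assumed upper bound on retries
        sleep 1
    done
    lsblk -l -o NAME | grep -q -w "$name"   # confirm once more, as the trace does
}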
00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:28.436 14:01:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme4n1 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme4n1 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:30.352 Malloc5 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:30.352 14:01:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1234 -- # local i=0 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme5n1 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme5n1 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:33:31.760 14:01:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:33:31.760 [global] 00:33:31.760 thread=1 00:33:31.760 invalidate=1 00:33:31.760 rw=read 00:33:31.760 time_based=1 00:33:31.760 runtime=10 00:33:31.760 ioengine=libaio 00:33:31.761 direct=1 00:33:31.761 bs=1048576 00:33:31.761 iodepth=128 00:33:31.761 norandommap=1 00:33:31.761 numjobs=13 00:33:31.761 00:33:31.761 [job0] 00:33:31.761 filename=/dev/nvme0n1 00:33:31.761 [job1] 00:33:31.761 filename=/dev/nvme1n1 00:33:31.761 [job2] 00:33:31.761 filename=/dev/nvme2n1 00:33:31.761 [job3] 00:33:31.761 filename=/dev/nvme3n1 00:33:31.761 [job4] 00:33:31.761 filename=/dev/nvme4n1 00:33:31.761 [job5] 00:33:31.761 filename=/dev/nvme5n1 00:33:31.761 Could not set queue depth (nvme0n1) 00:33:31.761 Could not set queue depth (nvme1n1) 00:33:31.761 Could not set queue depth (nvme2n1) 00:33:31.761 Could not set queue depth (nvme3n1) 00:33:31.761 Could not set queue depth (nvme4n1) 00:33:31.761 Could not set queue depth (nvme5n1) 00:33:32.026 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:33:32.026 ... 00:33:32.026 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:33:32.026 ... 00:33:32.026 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:33:32.026 ... 00:33:32.026 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:33:32.026 ... 00:33:32.026 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:33:32.026 ... 00:33:32.026 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:33:32.026 ... 
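The [global]/[jobN] dump above is the job file that scripts/fio-wrapper builds for "-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13". Reassembled as a plain fio job file (contents copied from the dump; the /tmp path is only for illustration):

# Reconstruction of the fio job file printed above; writing it out by hand and
# running fio against it should reproduce the same read workload.
cat > /tmp/nvmf_srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1
EOF
# fio /tmp/nvmf_srq_overwhelm.fio

Six job sections times numjobs=13 accounts for the "Starting 78 threads" line fio prints just below.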
00:33:32.026 fio-3.35 00:33:32.026 Starting 78 threads 00:33:46.925 00:33:46.925 job0: (groupid=0, jobs=1): err= 0: pid=2296528: Tue Jun 11 14:01:38 2024 00:33:46.925 read: IOPS=4, BW=5118KiB/s (5241kB/s)(65.0MiB/13005msec) 00:33:46.925 slat (usec): min=600, max=4237.1k, avg=167509.65, stdev=677113.30 00:33:46.925 clat (msec): min=2115, max=13003, avg=12359.01, stdev=1879.70 00:33:46.925 lat (msec): min=6352, max=13003, avg=12526.52, stdev=1368.12 00:33:46.925 clat percentiles (msec): 00:33:46.925 | 1.00th=[ 2123], 5.00th=[ 8490], 10.00th=[12684], 20.00th=[12818], 00:33:46.925 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:33:46.925 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:33:46.925 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:33:46.925 | 99.99th=[12953] 00:33:46.925 lat (msec) : >=2000=100.00% 00:33:46.925 cpu : usr=0.01%, sys=0.78%, ctx=109, majf=0, minf=16641 00:33:46.925 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:33:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.925 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:33:46.925 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.925 job0: (groupid=0, jobs=1): err= 0: pid=2296530: Tue Jun 11 14:01:38 2024 00:33:46.925 read: IOPS=4, BW=5099KiB/s (5222kB/s)(64.0MiB/12852msec) 00:33:46.925 slat (usec): min=663, max=2110.8k, avg=167646.21, stdev=559991.15 00:33:46.925 clat (msec): min=2121, max=12849, avg=9880.93, stdev=3327.13 00:33:46.925 lat (msec): min=4201, max=12851, avg=10048.58, stdev=3197.77 00:33:46.925 clat percentiles (msec): 00:33:46.925 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:33:46.925 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:33:46.925 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:33:46.925 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:33:46.925 | 99.99th=[12818] 00:33:46.925 lat (msec) : >=2000=100.00% 00:33:46.925 cpu : usr=0.00%, sys=0.61%, ctx=77, majf=0, minf=16385 00:33:46.925 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:33:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.925 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:33:46.925 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.925 job0: (groupid=0, jobs=1): err= 0: pid=2296531: Tue Jun 11 14:01:38 2024 00:33:46.925 read: IOPS=2, BW=2211KiB/s (2264kB/s)(23.0MiB/10651msec) 00:33:46.925 slat (msec): min=4, max=2131, avg=460.71, stdev=876.63 00:33:46.925 clat (msec): min=53, max=10643, avg=4959.20, stdev=2755.58 00:33:46.925 lat (msec): min=2132, max=10650, avg=5419.91, stdev=2783.81 00:33:46.925 clat percentiles (msec): 00:33:46.925 | 1.00th=[ 54], 5.00th=[ 2140], 10.00th=[ 2140], 20.00th=[ 2165], 00:33:46.925 | 30.00th=[ 2198], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 6409], 00:33:46.925 | 70.00th=[ 6477], 80.00th=[ 6477], 90.00th=[ 8658], 95.00th=[10671], 00:33:46.925 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:33:46.925 | 99.99th=[10671] 00:33:46.925 lat (msec) : 100=4.35%, >=2000=95.65% 00:33:46.925 cpu : usr=0.00%, sys=0.15%, ctx=55, majf=0, minf=5889 
00:33:46.925 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:33:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.925 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.925 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.925 job0: (groupid=0, jobs=1): err= 0: pid=2296532: Tue Jun 11 14:01:38 2024 00:33:46.925 read: IOPS=103, BW=103MiB/s (108MB/s)(1324MiB/12817msec) 00:33:46.925 slat (usec): min=31, max=2051.9k, avg=8071.66, stdev=79218.09 00:33:46.925 clat (msec): min=329, max=4936, avg=1198.16, stdev=1262.99 00:33:46.925 lat (msec): min=333, max=4936, avg=1206.23, stdev=1266.47 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 393], 20.00th=[ 430], 00:33:46.926 | 30.00th=[ 456], 40.00th=[ 735], 50.00th=[ 743], 60.00th=[ 810], 00:33:46.926 | 70.00th=[ 844], 80.00th=[ 911], 90.00th=[ 2769], 95.00th=[ 4530], 00:33:46.926 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:33:46.926 | 99.99th=[ 4933] 00:33:46.926 bw ( KiB/s): min= 2052, max=325632, per=4.37%, avg=163392.47, stdev=88971.87, samples=15 00:33:46.926 iops : min= 2, max= 318, avg=159.53, stdev=86.84, samples=15 00:33:46.926 lat (msec) : 500=35.88%, 750=14.65%, 1000=30.21%, >=2000=19.26% 00:33:46.926 cpu : usr=0.07%, sys=1.64%, ctx=1233, majf=0, minf=32769 00:33:46.926 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.926 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296533: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=0, BW=882KiB/s (903kB/s)(11.0MiB/12769msec) 00:33:46.926 slat (msec): min=2, max=4313, avg=967.95, stdev=1458.98 00:33:46.926 clat (msec): min=2120, max=12765, avg=9470.87, stdev=4082.09 00:33:46.926 lat (msec): min=4209, max=12768, avg=10438.82, stdev=3364.13 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:33:46.926 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12684], 00:33:46.926 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:33:46.926 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:33:46.926 | 99.99th=[12818] 00:33:46.926 lat (msec) : >=2000=100.00% 00:33:46.926 cpu : usr=0.00%, sys=0.05%, ctx=46, majf=0, minf=2817 00:33:46.926 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296534: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=2, BW=2225KiB/s (2278kB/s)(28.0MiB/12889msec) 00:33:46.926 slat (msec): min=2, max=4252, avg=384.92, stdev=995.73 00:33:46.926 clat (msec): min=2110, max=12883, avg=10745.87, stdev=3587.14 00:33:46.926 lat (msec): min=4209, max=12888, avg=11130.79, stdev=3181.54 
00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6275], 00:33:46.926 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:33:46.926 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:33:46.926 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:33:46.926 | 99.99th=[12818] 00:33:46.926 lat (msec) : >=2000=100.00% 00:33:46.926 cpu : usr=0.01%, sys=0.38%, ctx=88, majf=0, minf=7169 00:33:46.926 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.926 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296535: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=7, BW=7962KiB/s (8154kB/s)(101MiB/12989msec) 00:33:46.926 slat (usec): min=411, max=2144.4k, avg=107744.84, stdev=448511.71 00:33:46.926 clat (msec): min=2106, max=12988, avg=11871.24, stdev=2341.21 00:33:46.926 lat (msec): min=4173, max=12988, avg=11978.99, stdev=2128.02 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 4178], 5.00th=[ 4279], 10.00th=[10671], 20.00th=[12550], 00:33:46.926 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:33:46.926 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:33:46.926 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:33:46.926 | 99.99th=[12953] 00:33:46.926 lat (msec) : >=2000=100.00% 00:33:46.926 cpu : usr=0.00%, sys=1.03%, ctx=130, majf=0, minf=25857 00:33:46.926 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=7.9%, 16=15.8%, 32=31.7%, >=64=37.6% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:33:46.926 issued rwts: total=101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296536: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=2, BW=2230KiB/s (2284kB/s)(28.0MiB/12856msec) 00:33:46.926 slat (usec): min=663, max=4306.2k, avg=383386.13, stdev=1162037.58 00:33:46.926 clat (msec): min=2121, max=12849, avg=12034.34, stdev=2343.98 00:33:46.926 lat (msec): min=6352, max=12855, avg=12417.73, stdev=1314.16 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[10671], 20.00th=[12684], 00:33:46.926 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:33:46.926 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:33:46.926 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:33:46.926 | 99.99th=[12818] 00:33:46.926 lat (msec) : >=2000=100.00% 00:33:46.926 cpu : usr=0.01%, sys=0.36%, ctx=73, majf=0, minf=7169 00:33:46.926 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.926 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: 
pid=2296538: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(240MiB/10692msec) 00:33:46.926 slat (usec): min=297, max=2104.4k, avg=44330.54, stdev=266435.86 00:33:46.926 clat (msec): min=50, max=9649, avg=5334.01, stdev=3773.88 00:33:46.926 lat (msec): min=1124, max=9673, avg=5378.34, stdev=3764.60 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 1116], 5.00th=[ 1133], 10.00th=[ 1150], 20.00th=[ 1167], 00:33:46.926 | 30.00th=[ 1200], 40.00th=[ 1284], 50.00th=[ 6477], 60.00th=[ 8792], 00:33:46.926 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9463], 00:33:46.926 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:33:46.926 | 99.99th=[ 9597] 00:33:46.926 bw ( KiB/s): min= 4087, max=116736, per=0.88%, avg=32766.71, stdev=43057.73, samples=7 00:33:46.926 iops : min= 3, max= 114, avg=31.86, stdev=42.16, samples=7 00:33:46.926 lat (msec) : 100=0.42%, 2000=40.00%, >=2000=59.58% 00:33:46.926 cpu : usr=0.01%, sys=0.83%, ctx=545, majf=0, minf=32769 00:33:46.926 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.3%, >=64=73.8% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:33:46.926 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296539: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=3, BW=3224KiB/s (3302kB/s)(34.0MiB/10798msec) 00:33:46.926 slat (usec): min=436, max=2150.2k, avg=315858.54, stdev=750969.62 00:33:46.926 clat (msec): min=57, max=10796, avg=8586.68, stdev=3412.54 00:33:46.926 lat (msec): min=2138, max=10796, avg=8902.53, stdev=3079.99 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 58], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329], 00:33:46.926 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:33:46.926 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:33:46.926 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:33:46.926 | 99.99th=[10805] 00:33:46.926 lat (msec) : 100=2.94%, >=2000=97.06% 00:33:46.926 cpu : usr=0.01%, sys=0.50%, ctx=79, majf=0, minf=8705 00:33:46.926 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:33:46.926 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296540: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=1, BW=1916KiB/s (1962kB/s)(24.0MiB/12827msec) 00:33:46.926 slat (usec): min=715, max=4237.1k, avg=446074.08, stdev=1070319.93 00:33:46.926 clat (msec): min=2120, max=12826, avg=10381.71, stdev=3287.71 00:33:46.926 lat (msec): min=6358, max=12826, avg=10827.78, stdev=2809.69 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[ 6409], 00:33:46.926 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12684], 60.00th=[12818], 00:33:46.926 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:33:46.926 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:33:46.926 | 99.99th=[12818] 00:33:46.926 lat (msec) : 
>=2000=100.00% 00:33:46.926 cpu : usr=0.00%, sys=0.24%, ctx=64, majf=0, minf=6145 00:33:46.926 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:33:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.926 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.926 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.926 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.926 job0: (groupid=0, jobs=1): err= 0: pid=2296541: Tue Jun 11 14:01:38 2024 00:33:46.926 read: IOPS=31, BW=31.5MiB/s (33.1MB/s)(344MiB/10904msec) 00:33:46.926 slat (usec): min=459, max=2104.4k, avg=31525.62, stdev=223228.86 00:33:46.926 clat (msec): min=56, max=9326, avg=3863.81, stdev=3728.57 00:33:46.926 lat (msec): min=832, max=9342, avg=3895.33, stdev=3731.24 00:33:46.926 clat percentiles (msec): 00:33:46.926 | 1.00th=[ 835], 5.00th=[ 860], 10.00th=[ 860], 20.00th=[ 877], 00:33:46.926 | 30.00th=[ 902], 40.00th=[ 936], 50.00th=[ 1011], 60.00th=[ 2198], 00:33:46.926 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9194], 00:33:46.926 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:33:46.926 | 99.99th=[ 9329] 00:33:46.926 bw ( KiB/s): min=10240, max=155648, per=1.69%, avg=63198.29, stdev=67523.73, samples=7 00:33:46.927 iops : min= 10, max= 152, avg=61.71, stdev=65.94, samples=7 00:33:46.927 lat (msec) : 100=0.29%, 1000=47.38%, 2000=11.05%, >=2000=41.28% 00:33:46.927 cpu : usr=0.03%, sys=1.43%, ctx=589, majf=0, minf=32769 00:33:46.927 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.3%, >=64=81.7% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:33:46.927 issued rwts: total=344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job0: (groupid=0, jobs=1): err= 0: pid=2296542: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=1, BW=1850KiB/s (1895kB/s)(23.0MiB/12728msec) 00:33:46.927 slat (msec): min=2, max=4241, avg=461.60, stdev=1083.21 00:33:46.927 clat (msec): min=2110, max=12719, avg=11190.16, stdev=2852.70 00:33:46.927 lat (msec): min=6352, max=12727, avg=11651.75, stdev=2067.71 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 2106], 5.00th=[ 6342], 10.00th=[ 6409], 20.00th=[ 8557], 00:33:46.927 | 30.00th=[12550], 40.00th=[12550], 50.00th=[12550], 60.00th=[12684], 00:33:46.927 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:33:46.927 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:33:46.927 | 99.99th=[12684] 00:33:46.927 lat (msec) : >=2000=100.00% 00:33:46.927 cpu : usr=0.00%, sys=0.12%, ctx=83, majf=0, minf=5889 00:33:46.927 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.927 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296560: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=2, BW=2159KiB/s (2211kB/s)(27.0MiB/12807msec) 00:33:46.927 slat (usec): min=722, max=4223.0k, avg=395473.90, stdev=1004465.06 00:33:46.927 clat (msec): min=2128, max=12805, 
avg=11228.42, stdev=2799.65 00:33:46.927 lat (msec): min=6351, max=12806, avg=11623.89, stdev=2141.55 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[10671], 00:33:46.927 | 30.00th=[12550], 40.00th=[12550], 50.00th=[12550], 60.00th=[12684], 00:33:46.927 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:33:46.927 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:33:46.927 | 99.99th=[12818] 00:33:46.927 lat (msec) : >=2000=100.00% 00:33:46.927 cpu : usr=0.00%, sys=0.26%, ctx=86, majf=0, minf=6913 00:33:46.927 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.927 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296561: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=142, BW=142MiB/s (149MB/s)(1432MiB/10053msec) 00:33:46.927 slat (usec): min=27, max=2069.7k, avg=6988.21, stdev=56534.91 00:33:46.927 clat (msec): min=38, max=3260, avg=755.09, stdev=539.89 00:33:46.927 lat (msec): min=59, max=4838, avg=762.08, stdev=548.64 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 75], 5.00th=[ 222], 10.00th=[ 330], 20.00th=[ 477], 00:33:46.927 | 30.00th=[ 584], 40.00th=[ 667], 50.00th=[ 735], 60.00th=[ 743], 00:33:46.927 | 70.00th=[ 802], 80.00th=[ 835], 90.00th=[ 911], 95.00th=[ 1099], 00:33:46.927 | 99.00th=[ 3239], 99.50th=[ 3239], 99.90th=[ 3272], 99.95th=[ 3272], 00:33:46.927 | 99.99th=[ 3272] 00:33:46.927 bw ( KiB/s): min=110592, max=370382, per=5.08%, avg=189953.86, stdev=62313.91, samples=14 00:33:46.927 iops : min= 108, max= 361, avg=185.36, stdev=60.73, samples=14 00:33:46.927 lat (msec) : 50=0.07%, 100=1.19%, 250=4.26%, 500=16.27%, 750=39.59% 00:33:46.927 lat (msec) : 1000=30.52%, 2000=4.19%, >=2000=3.91% 00:33:46.927 cpu : usr=0.05%, sys=2.12%, ctx=1464, majf=0, minf=32769 00:33:46.927 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.927 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296562: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=67, BW=67.2MiB/s (70.5MB/s)(869MiB/12923msec) 00:33:46.927 slat (usec): min=32, max=2091.8k, avg=12419.87, stdev=73155.98 00:33:46.927 clat (msec): min=421, max=8435, avg=1803.17, stdev=2133.39 00:33:46.927 lat (msec): min=422, max=8438, avg=1815.59, stdev=2143.93 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 422], 5.00th=[ 426], 10.00th=[ 451], 20.00th=[ 498], 00:33:46.927 | 30.00th=[ 617], 40.00th=[ 726], 50.00th=[ 751], 60.00th=[ 869], 00:33:46.927 | 70.00th=[ 1116], 80.00th=[ 2903], 90.00th=[ 5805], 95.00th=[ 7215], 00:33:46.927 | 99.00th=[ 8288], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:33:46.927 | 99.99th=[ 8423] 00:33:46.927 bw ( KiB/s): min= 2052, max=290816, per=2.39%, avg=89378.29, stdev=88827.63, samples=17 00:33:46.927 iops : min= 2, max= 284, avg=87.24, stdev=86.74, samples=17 00:33:46.927 lat (msec) : 500=20.14%, 750=29.80%, 
1000=15.65%, 2000=8.63%, >=2000=25.78% 00:33:46.927 cpu : usr=0.03%, sys=1.17%, ctx=1352, majf=0, minf=32769 00:33:46.927 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.927 issued rwts: total=869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296563: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=84, BW=84.2MiB/s (88.2MB/s)(851MiB/10112msec) 00:33:46.927 slat (usec): min=29, max=140069, avg=11751.83, stdev=17433.07 00:33:46.927 clat (msec): min=106, max=3542, avg=1422.41, stdev=937.20 00:33:46.927 lat (msec): min=112, max=3566, avg=1434.17, stdev=942.58 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 180], 5.00th=[ 435], 10.00th=[ 550], 20.00th=[ 609], 00:33:46.927 | 30.00th=[ 852], 40.00th=[ 961], 50.00th=[ 1062], 60.00th=[ 1250], 00:33:46.927 | 70.00th=[ 1687], 80.00th=[ 2299], 90.00th=[ 3171], 95.00th=[ 3406], 00:33:46.927 | 99.00th=[ 3507], 99.50th=[ 3540], 99.90th=[ 3540], 99.95th=[ 3540], 00:33:46.927 | 99.99th=[ 3540] 00:33:46.927 bw ( KiB/s): min=18432, max=231424, per=2.20%, avg=82173.67, stdev=56899.84, samples=18 00:33:46.927 iops : min= 18, max= 226, avg=80.22, stdev=55.54, samples=18 00:33:46.927 lat (msec) : 250=2.00%, 500=4.35%, 750=19.27%, 1000=19.62%, 2000=30.90% 00:33:46.927 lat (msec) : >=2000=23.85% 00:33:46.927 cpu : usr=0.07%, sys=1.66%, ctx=1445, majf=0, minf=32769 00:33:46.927 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.927 issued rwts: total=851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296564: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=3, BW=3955KiB/s (4050kB/s)(42.0MiB/10874msec) 00:33:46.927 slat (usec): min=690, max=2125.4k, avg=257429.62, stdev=679528.91 00:33:46.927 clat (msec): min=61, max=10872, avg=9039.70, stdev=3130.85 00:33:46.927 lat (msec): min=2132, max=10873, avg=9297.13, stdev=2801.82 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 62], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 6477], 00:33:46.927 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10671], 60.00th=[10805], 00:33:46.927 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:33:46.927 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:33:46.927 | 99.99th=[10939] 00:33:46.927 lat (msec) : 100=2.38%, >=2000=97.62% 00:33:46.927 cpu : usr=0.00%, sys=0.52%, ctx=99, majf=0, minf=10753 00:33:46.927 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:33:46.927 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296565: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=7, BW=7574KiB/s (7756kB/s)(96.0MiB/12979msec) 00:33:46.927 slat (usec): min=662, max=2113.3k, 
avg=113060.85, stdev=463062.08 00:33:46.927 clat (msec): min=2124, max=12977, avg=11249.75, stdev=2766.57 00:33:46.927 lat (msec): min=4196, max=12978, avg=11362.81, stdev=2606.89 00:33:46.927 clat percentiles (msec): 00:33:46.927 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8557], 00:33:46.927 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12818], 60.00th=[12818], 00:33:46.927 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:33:46.927 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:33:46.927 | 99.99th=[12953] 00:33:46.927 lat (msec) : >=2000=100.00% 00:33:46.927 cpu : usr=0.00%, sys=1.04%, ctx=113, majf=0, minf=24577 00:33:46.927 IO depths : 1=1.0%, 2=2.1%, 4=4.2%, 8=8.3%, 16=16.7%, 32=33.3%, >=64=34.4% 00:33:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.927 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:33:46.927 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.927 job1: (groupid=0, jobs=1): err= 0: pid=2296566: Tue Jun 11 14:01:38 2024 00:33:46.927 read: IOPS=1, BW=1530KiB/s (1567kB/s)(19.0MiB/12713msec) 00:33:46.927 slat (msec): min=5, max=2124, avg=557.49, stdev=938.15 00:33:46.927 clat (msec): min=2120, max=12674, avg=7502.16, stdev=2963.53 00:33:46.928 lat (msec): min=4204, max=12712, avg=8059.65, stdev=2890.31 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:33:46.928 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8557], 00:33:46.928 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[12684], 00:33:46.928 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:33:46.928 | 99.99th=[12684] 00:33:46.928 lat (msec) : >=2000=100.00% 00:33:46.928 cpu : usr=0.00%, sys=0.08%, ctx=49, majf=0, minf=4865 00:33:46.928 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.928 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job1: (groupid=0, jobs=1): err= 0: pid=2296568: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=17, BW=17.3MiB/s (18.1MB/s)(224MiB/12975msec) 00:33:46.928 slat (usec): min=139, max=2109.1k, avg=48481.86, stdev=251621.41 00:33:46.928 clat (msec): min=2113, max=12751, avg=6546.08, stdev=2276.93 00:33:46.928 lat (msec): min=3056, max=12752, avg=6594.57, stdev=2289.26 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 3071], 5.00th=[ 3138], 10.00th=[ 4396], 20.00th=[ 5000], 00:33:46.928 | 30.00th=[ 5336], 40.00th=[ 5604], 50.00th=[ 5873], 60.00th=[ 6409], 00:33:46.928 | 70.00th=[ 6946], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10134], 00:33:46.928 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12818], 99.95th=[12818], 00:33:46.928 | 99.99th=[12818] 00:33:46.928 bw ( KiB/s): min= 2052, max=43008, per=0.66%, avg=24829.88, stdev=16547.35, samples=8 00:33:46.928 iops : min= 2, max= 42, avg=24.12, stdev=16.29, samples=8 00:33:46.928 lat (msec) : >=2000=100.00% 00:33:46.928 cpu : usr=0.00%, sys=1.03%, ctx=487, majf=0, minf=32769 00:33:46.928 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.3%, >=64=71.9% 00:33:46.928 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:33:46.928 issued rwts: total=224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job1: (groupid=0, jobs=1): err= 0: pid=2296569: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=1, BW=1343KiB/s (1375kB/s)(14.0MiB/10674msec) 00:33:46.928 slat (msec): min=4, max=2129, avg=758.18, stdev=1035.07 00:33:46.928 clat (msec): min=58, max=10651, avg=6598.52, stdev=3282.76 00:33:46.928 lat (msec): min=2150, max=10673, avg=7356.70, stdev=2853.84 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 59], 5.00th=[ 59], 10.00th=[ 2165], 20.00th=[ 2198], 00:33:46.928 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658], 00:33:46.928 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[10671], 00:33:46.928 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:33:46.928 | 99.99th=[10671] 00:33:46.928 lat (msec) : 100=7.14%, >=2000=92.86% 00:33:46.928 cpu : usr=0.00%, sys=0.11%, ctx=54, majf=0, minf=3585 00:33:46.928 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job1: (groupid=0, jobs=1): err= 0: pid=2296570: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=19, BW=19.6MiB/s (20.6MB/s)(210MiB/10711msec) 00:33:46.928 slat (usec): min=60, max=2132.7k, avg=50716.80, stdev=246377.92 00:33:46.928 clat (msec): min=58, max=7103, avg=4574.89, stdev=1814.64 00:33:46.928 lat (msec): min=2157, max=7140, avg=4625.61, stdev=1781.95 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 2198], 5.00th=[ 2265], 10.00th=[ 2299], 20.00th=[ 2400], 00:33:46.928 | 30.00th=[ 2467], 40.00th=[ 4463], 50.00th=[ 5201], 60.00th=[ 5604], 00:33:46.928 | 70.00th=[ 5873], 80.00th=[ 6275], 90.00th=[ 6678], 95.00th=[ 6946], 00:33:46.928 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:33:46.928 | 99.99th=[ 7080] 00:33:46.928 bw ( KiB/s): min= 2048, max=71680, per=0.90%, avg=33587.20, stdev=31581.73, samples=5 00:33:46.928 iops : min= 2, max= 70, avg=32.80, stdev=30.84, samples=5 00:33:46.928 lat (msec) : 100=0.48%, >=2000=99.52% 00:33:46.928 cpu : usr=0.07%, sys=0.87%, ctx=480, majf=0, minf=32769 00:33:46.928 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.8%, 16=7.6%, 32=15.2%, >=64=70.0% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:33:46.928 issued rwts: total=210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job1: (groupid=0, jobs=1): err= 0: pid=2296571: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=4, BW=4170KiB/s (4270kB/s)(44.0MiB/10804msec) 00:33:46.928 slat (usec): min=1802, max=2131.3k, avg=244208.80, stdev=669872.21 00:33:46.928 clat (msec): min=57, max=10800, avg=8400.23, stdev=3356.04 00:33:46.928 lat (msec): min=2149, max=10803, avg=8644.44, stdev=3117.33 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 58], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4329], 00:33:46.928 | 
30.00th=[ 6477], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:33:46.928 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:33:46.928 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:33:46.928 | 99.99th=[10805] 00:33:46.928 lat (msec) : 100=2.27%, >=2000=97.73% 00:33:46.928 cpu : usr=0.00%, sys=0.66%, ctx=85, majf=0, minf=11265 00:33:46.928 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:33:46.928 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job1: (groupid=0, jobs=1): err= 0: pid=2296572: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=23, BW=23.2MiB/s (24.3MB/s)(250MiB/10771msec) 00:33:46.928 slat (usec): min=402, max=2134.2k, avg=42846.48, stdev=183481.77 00:33:46.928 clat (msec): min=57, max=5723, avg=3872.56, stdev=952.43 00:33:46.928 lat (msec): min=2191, max=5750, avg=3915.40, stdev=928.50 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 2198], 5.00th=[ 2567], 10.00th=[ 2903], 20.00th=[ 3272], 00:33:46.928 | 30.00th=[ 3406], 40.00th=[ 3473], 50.00th=[ 3574], 60.00th=[ 3641], 00:33:46.928 | 70.00th=[ 4329], 80.00th=[ 5067], 90.00th=[ 5336], 95.00th=[ 5537], 00:33:46.928 | 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:33:46.928 | 99.99th=[ 5738] 00:33:46.928 bw ( KiB/s): min= 8192, max=51200, per=0.84%, avg=31232.00, stdev=14512.55, samples=8 00:33:46.928 iops : min= 8, max= 50, avg=30.50, stdev=14.17, samples=8 00:33:46.928 lat (msec) : 100=0.40%, >=2000=99.60% 00:33:46.928 cpu : usr=0.06%, sys=1.23%, ctx=815, majf=0, minf=32769 00:33:46.928 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.8%, >=64=74.8% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:33:46.928 issued rwts: total=250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job1: (groupid=0, jobs=1): err= 0: pid=2296573: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=18, BW=18.5MiB/s (19.4MB/s)(236MiB/12737msec) 00:33:46.928 slat (usec): min=91, max=2099.7k, avg=44944.37, stdev=229561.32 00:33:46.928 clat (msec): min=1459, max=7676, avg=4827.94, stdev=2272.02 00:33:46.928 lat (msec): min=1466, max=7689, avg=4872.89, stdev=2257.63 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 1469], 5.00th=[ 1603], 10.00th=[ 1737], 20.00th=[ 2265], 00:33:46.928 | 30.00th=[ 2802], 40.00th=[ 3339], 50.00th=[ 6342], 60.00th=[ 6611], 00:33:46.928 | 70.00th=[ 6879], 80.00th=[ 7013], 90.00th=[ 7282], 95.00th=[ 7416], 00:33:46.928 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:33:46.928 | 99.99th=[ 7684] 00:33:46.928 bw ( KiB/s): min= 2052, max=114688, per=0.99%, avg=37206.00, stdev=49277.84, samples=6 00:33:46.928 iops : min= 2, max= 112, avg=36.33, stdev=48.12, samples=6 00:33:46.928 lat (msec) : 2000=14.83%, >=2000=85.17% 00:33:46.928 cpu : usr=0.04%, sys=0.94%, ctx=510, majf=0, minf=32769 00:33:46.928 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.3% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=99.1%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:33:46.928 issued rwts: total=236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.928 job2: (groupid=0, jobs=1): err= 0: pid=2296584: Tue Jun 11 14:01:38 2024 00:33:46.928 read: IOPS=37, BW=37.4MiB/s (39.2MB/s)(379MiB/10143msec) 00:33:46.928 slat (usec): min=57, max=1996.2k, avg=26503.45, stdev=105217.12 00:33:46.928 clat (msec): min=95, max=5936, avg=2549.50, stdev=1303.24 00:33:46.928 lat (msec): min=148, max=5950, avg=2576.01, stdev=1310.63 00:33:46.928 clat percentiles (msec): 00:33:46.928 | 1.00th=[ 150], 5.00th=[ 330], 10.00th=[ 1133], 20.00th=[ 1804], 00:33:46.928 | 30.00th=[ 1938], 40.00th=[ 2022], 50.00th=[ 2299], 60.00th=[ 2500], 00:33:46.928 | 70.00th=[ 2937], 80.00th=[ 3440], 90.00th=[ 4732], 95.00th=[ 5805], 00:33:46.928 | 99.00th=[ 5873], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:33:46.928 | 99.99th=[ 5940] 00:33:46.928 bw ( KiB/s): min=18432, max=77824, per=1.25%, avg=46679.91, stdev=22365.22, samples=11 00:33:46.928 iops : min= 18, max= 76, avg=45.36, stdev=21.69, samples=11 00:33:46.928 lat (msec) : 100=0.26%, 250=2.11%, 500=3.17%, 750=2.37%, 1000=1.06% 00:33:46.928 lat (msec) : 2000=28.50%, >=2000=62.53% 00:33:46.928 cpu : usr=0.01%, sys=1.74%, ctx=1010, majf=0, minf=32769 00:33:46.928 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.4% 00:33:46.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.928 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:33:46.928 issued rwts: total=379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296585: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=98, BW=98.6MiB/s (103MB/s)(1068MiB/10829msec) 00:33:46.929 slat (usec): min=28, max=2153.4k, avg=10076.50, stdev=99873.05 00:33:46.929 clat (msec): min=60, max=6807, avg=1252.94, stdev=1896.26 00:33:46.929 lat (msec): min=392, max=6808, avg=1263.02, stdev=1902.34 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 393], 20.00th=[ 397], 00:33:46.929 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 498], 60.00th=[ 510], 00:33:46.929 | 70.00th=[ 558], 80.00th=[ 1133], 90.00th=[ 5336], 95.00th=[ 6611], 00:33:46.929 | 99.00th=[ 6745], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:33:46.929 | 99.99th=[ 6812] 00:33:46.929 bw ( KiB/s): min= 2043, max=325632, per=3.96%, avg=147954.15, stdev=133593.32, samples=13 00:33:46.929 iops : min= 1, max= 318, avg=144.23, stdev=130.69, samples=13 00:33:46.929 lat (msec) : 100=0.09%, 500=55.06%, 750=18.35%, 1000=1.69%, 2000=11.89% 00:33:46.929 lat (msec) : >=2000=12.92% 00:33:46.929 cpu : usr=0.07%, sys=1.91%, ctx=1266, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.1% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.929 issued rwts: total=1068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296586: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=75, BW=75.0MiB/s (78.7MB/s)(801MiB/10674msec) 00:33:46.929 slat (usec): min=35, max=2222.8k, avg=13232.81, stdev=91271.59 00:33:46.929 clat (msec): min=70, max=5320, 
avg=1574.40, stdev=1468.33 00:33:46.929 lat (msec): min=626, max=5333, avg=1587.63, stdev=1470.98 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 634], 5.00th=[ 659], 10.00th=[ 693], 20.00th=[ 751], 00:33:46.929 | 30.00th=[ 810], 40.00th=[ 844], 50.00th=[ 894], 60.00th=[ 936], 00:33:46.929 | 70.00th=[ 1284], 80.00th=[ 1603], 90.00th=[ 4866], 95.00th=[ 5067], 00:33:46.929 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:33:46.929 | 99.99th=[ 5336] 00:33:46.929 bw ( KiB/s): min= 2048, max=198259, per=3.07%, avg=114804.42, stdev=69822.39, samples=12 00:33:46.929 iops : min= 2, max= 193, avg=112.00, stdev=68.11, samples=12 00:33:46.929 lat (msec) : 100=0.12%, 750=20.22%, 1000=43.57%, 2000=19.85%, >=2000=16.23% 00:33:46.929 cpu : usr=0.04%, sys=0.88%, ctx=1896, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.929 issued rwts: total=801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296587: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=91, BW=91.6MiB/s (96.0MB/s)(995MiB/10867msec) 00:33:46.929 slat (usec): min=44, max=2146.3k, avg=10843.28, stdev=83538.30 00:33:46.929 clat (msec): min=71, max=5235, avg=1338.88, stdev=1287.35 00:33:46.929 lat (msec): min=541, max=5241, avg=1349.72, stdev=1291.31 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 542], 5.00th=[ 542], 10.00th=[ 550], 20.00th=[ 550], 00:33:46.929 | 30.00th=[ 575], 40.00th=[ 625], 50.00th=[ 743], 60.00th=[ 1011], 00:33:46.929 | 70.00th=[ 1150], 80.00th=[ 1636], 90.00th=[ 4396], 95.00th=[ 4732], 00:33:46.929 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5269], 99.95th=[ 5269], 00:33:46.929 | 99.99th=[ 5269] 00:33:46.929 bw ( KiB/s): min= 8192, max=243712, per=3.16%, avg=118359.93, stdev=76404.85, samples=15 00:33:46.929 iops : min= 8, max= 238, avg=115.53, stdev=74.62, samples=15 00:33:46.929 lat (msec) : 100=0.10%, 750=50.75%, 1000=8.64%, 2000=26.93%, >=2000=13.57% 00:33:46.929 cpu : usr=0.06%, sys=1.94%, ctx=1692, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.929 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296588: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=35, BW=35.1MiB/s (36.8MB/s)(374MiB/10661msec) 00:33:46.929 slat (usec): min=31, max=2152.9k, avg=28307.74, stdev=217421.81 00:33:46.929 clat (msec): min=71, max=9059, avg=3408.12, stdev=3740.43 00:33:46.929 lat (msec): min=495, max=9063, avg=3436.43, stdev=3744.90 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 493], 5.00th=[ 498], 10.00th=[ 498], 20.00th=[ 506], 00:33:46.929 | 30.00th=[ 518], 40.00th=[ 659], 50.00th=[ 835], 60.00th=[ 1011], 00:33:46.929 | 70.00th=[ 6946], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 8926], 00:33:46.929 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:33:46.929 | 99.99th=[ 9060] 00:33:46.929 bw ( KiB/s): min= 2048, max=257532, per=1.92%, 
avg=71898.86, stdev=108186.39, samples=7 00:33:46.929 iops : min= 2, max= 251, avg=70.14, stdev=105.51, samples=7 00:33:46.929 lat (msec) : 100=0.27%, 500=12.30%, 750=33.16%, 1000=13.10%, 2000=2.94% 00:33:46.929 lat (msec) : >=2000=38.24% 00:33:46.929 cpu : usr=0.00%, sys=0.82%, ctx=512, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.3%, 32=8.6%, >=64=83.2% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:33:46.929 issued rwts: total=374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296589: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=49, BW=49.4MiB/s (51.8MB/s)(527MiB/10668msec) 00:33:46.929 slat (usec): min=33, max=2142.6k, avg=20131.30, stdev=130292.42 00:33:46.929 clat (msec): min=56, max=5311, avg=2351.12, stdev=1387.26 00:33:46.929 lat (msec): min=1031, max=5312, avg=2371.25, stdev=1384.33 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 1036], 5.00th=[ 1070], 10.00th=[ 1116], 20.00th=[ 1200], 00:33:46.929 | 30.00th=[ 1435], 40.00th=[ 1703], 50.00th=[ 1787], 60.00th=[ 1854], 00:33:46.929 | 70.00th=[ 2333], 80.00th=[ 4396], 90.00th=[ 4732], 95.00th=[ 5067], 00:33:46.929 | 99.00th=[ 5269], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:33:46.929 | 99.99th=[ 5336] 00:33:46.929 bw ( KiB/s): min= 4096, max=137216, per=1.99%, avg=74258.64, stdev=46401.47, samples=11 00:33:46.929 iops : min= 4, max= 134, avg=72.36, stdev=45.32, samples=11 00:33:46.929 lat (msec) : 100=0.19%, 2000=62.24%, >=2000=37.57% 00:33:46.929 cpu : usr=0.02%, sys=0.92%, ctx=1514, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.1%, >=64=88.0% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.929 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296591: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=93, BW=93.7MiB/s (98.3MB/s)(1022MiB/10904msec) 00:33:46.929 slat (usec): min=30, max=2130.8k, avg=9782.33, stdev=94279.25 00:33:46.929 clat (msec): min=396, max=6952, avg=1320.90, stdev=1835.85 00:33:46.929 lat (msec): min=398, max=6959, avg=1330.68, stdev=1844.41 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 401], 5.00th=[ 430], 10.00th=[ 464], 20.00th=[ 502], 00:33:46.929 | 30.00th=[ 506], 40.00th=[ 510], 50.00th=[ 514], 60.00th=[ 550], 00:33:46.929 | 70.00th=[ 701], 80.00th=[ 1070], 90.00th=[ 5738], 95.00th=[ 6007], 00:33:46.929 | 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:33:46.929 | 99.99th=[ 6946] 00:33:46.929 bw ( KiB/s): min= 4096, max=299008, per=4.08%, avg=152449.58, stdev=104067.75, samples=12 00:33:46.929 iops : min= 4, max= 292, avg=148.83, stdev=101.66, samples=12 00:33:46.929 lat (msec) : 500=18.59%, 750=52.45%, 1000=5.48%, 2000=10.27%, >=2000=13.21% 00:33:46.929 cpu : usr=0.02%, sys=2.06%, ctx=1370, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
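The fio read summaries in these records are internally consistent: the headline BW is total data divided by runtime, the parenthesised MB/s value is the same figure in decimal units, and the average IOPS equals the average bandwidth divided by the transfer size. A minimal sketch of that arithmetic for the pid=2296591 record above follows; the 1 MiB block size is inferred from avg bw / avg IOPS in this excerpt and is an assumption, not something the log states.

# Sketch: re-derive the headline figures of
# "read: IOPS=93, BW=93.7MiB/s (98.3MB/s)(1022MiB/10904msec)" from the record above.
total_mib = 1022                              # data read, MiB
runtime_ms = 10904                            # job runtime, ms
bw_mib_s = total_mib / (runtime_ms / 1000)    # ~93.7 MiB/s
bw_mb_s = bw_mib_s * (1024 ** 2) / 1e6        # ~98.3 MB/s (same rate in decimal units)

avg_bw_kib_s = 152449.58                      # from "bw ( KiB/s): ... avg=152449.58"
block_kib = 1024                              # assumed 1 MiB transfer size (inferred)
approx_iops = avg_bw_kib_s / block_kib        # ~148.8, matching "iops : ... avg=148.83"
print(f"{bw_mib_s:.1f} MiB/s, {bw_mb_s:.1f} MB/s, ~{approx_iops:.1f} IOPS")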
00:33:46.929 issued rwts: total=1022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296592: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=172, BW=173MiB/s (181MB/s)(1729MiB/10009msec) 00:33:46.929 slat (usec): min=24, max=1868.9k, avg=5777.23, stdev=45896.53 00:33:46.929 clat (msec): min=7, max=2661, avg=594.35, stdev=341.70 00:33:46.929 lat (msec): min=8, max=2671, avg=600.13, stdev=346.60 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 20], 5.00th=[ 68], 10.00th=[ 146], 20.00th=[ 300], 00:33:46.929 | 30.00th=[ 460], 40.00th=[ 592], 50.00th=[ 693], 60.00th=[ 735], 00:33:46.929 | 70.00th=[ 751], 80.00th=[ 785], 90.00th=[ 835], 95.00th=[ 860], 00:33:46.929 | 99.00th=[ 2567], 99.50th=[ 2635], 99.90th=[ 2668], 99.95th=[ 2668], 00:33:46.929 | 99.99th=[ 2668] 00:33:46.929 bw ( KiB/s): min=133120, max=335201, per=5.00%, avg=187127.64, stdev=52628.63, samples=14 00:33:46.929 iops : min= 130, max= 327, avg=182.57, stdev=51.39, samples=14 00:33:46.929 lat (msec) : 10=0.17%, 20=0.87%, 50=2.60%, 100=2.89%, 250=10.82% 00:33:46.929 lat (msec) : 500=15.21%, 750=36.32%, 1000=29.90%, >=2000=1.21% 00:33:46.929 cpu : usr=0.08%, sys=2.06%, ctx=1614, majf=0, minf=32769 00:33:46.929 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.4% 00:33:46.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.929 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.929 issued rwts: total=1729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.929 job2: (groupid=0, jobs=1): err= 0: pid=2296593: Tue Jun 11 14:01:38 2024 00:33:46.929 read: IOPS=44, BW=44.4MiB/s (46.6MB/s)(476MiB/10715msec) 00:33:46.929 slat (usec): min=27, max=2079.2k, avg=22395.73, stdev=158997.08 00:33:46.929 clat (msec): min=52, max=6489, avg=2147.28, stdev=1274.74 00:33:46.929 lat (msec): min=645, max=6491, avg=2169.67, stdev=1282.26 00:33:46.929 clat percentiles (msec): 00:33:46.929 | 1.00th=[ 651], 5.00th=[ 684], 10.00th=[ 693], 20.00th=[ 718], 00:33:46.929 | 30.00th=[ 751], 40.00th=[ 1754], 50.00th=[ 2433], 60.00th=[ 2802], 00:33:46.929 | 70.00th=[ 3104], 80.00th=[ 3339], 90.00th=[ 3574], 95.00th=[ 3842], 00:33:46.930 | 99.00th=[ 4329], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:33:46.930 | 99.99th=[ 6477] 00:33:46.930 bw ( KiB/s): min=10240, max=182272, per=2.12%, avg=79171.11, stdev=53020.03, samples=9 00:33:46.930 iops : min= 10, max= 178, avg=77.22, stdev=51.77, samples=9 00:33:46.930 lat (msec) : 100=0.21%, 750=30.25%, 1000=7.35%, 2000=7.56%, >=2000=54.62% 00:33:46.930 cpu : usr=0.01%, sys=1.05%, ctx=960, majf=0, minf=32769 00:33:46.930 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.8% 00:33:46.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.930 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:33:46.930 issued rwts: total=476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.930 job2: (groupid=0, jobs=1): err= 0: pid=2296594: Tue Jun 11 14:01:38 2024 00:33:46.930 read: IOPS=21, BW=21.2MiB/s (22.2MB/s)(271MiB/12793msec) 00:33:46.930 slat (usec): min=119, max=2159.0k, avg=39319.81, stdev=252963.83 00:33:46.930 clat (msec): min=943, max=11686, avg=5770.98, stdev=4740.24 00:33:46.930 lat (msec): min=955, max=11690, 
avg=5810.30, stdev=4745.25 00:33:46.930 clat percentiles (msec): 00:33:46.930 | 1.00th=[ 953], 5.00th=[ 986], 10.00th=[ 1003], 20.00th=[ 1028], 00:33:46.930 | 30.00th=[ 1070], 40.00th=[ 1133], 50.00th=[ 4245], 60.00th=[ 9597], 00:33:46.930 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11476], 95.00th=[11610], 00:33:46.930 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11745], 99.95th=[11745], 00:33:46.930 | 99.99th=[11745] 00:33:46.930 bw ( KiB/s): min= 2048, max=112415, per=0.98%, avg=36834.25, stdev=46830.01, samples=8 00:33:46.930 iops : min= 2, max= 109, avg=35.75, stdev=45.64, samples=8 00:33:46.930 lat (msec) : 1000=10.70%, 2000=35.42%, >=2000=53.87% 00:33:46.930 cpu : usr=0.00%, sys=0.70%, ctx=555, majf=0, minf=32769 00:33:46.930 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.8%, >=64=76.8% 00:33:46.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.930 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:33:46.930 issued rwts: total=271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.930 job2: (groupid=0, jobs=1): err= 0: pid=2296595: Tue Jun 11 14:01:38 2024 00:33:46.930 read: IOPS=55, BW=55.0MiB/s (57.7MB/s)(593MiB/10780msec) 00:33:46.930 slat (usec): min=30, max=1983.4k, avg=16881.21, stdev=88114.48 00:33:46.930 clat (msec): min=765, max=5169, avg=1645.54, stdev=848.52 00:33:46.930 lat (msec): min=850, max=5182, avg=1662.42, stdev=863.33 00:33:46.930 clat percentiles (msec): 00:33:46.930 | 1.00th=[ 852], 5.00th=[ 936], 10.00th=[ 944], 20.00th=[ 961], 00:33:46.930 | 30.00th=[ 1099], 40.00th=[ 1334], 50.00th=[ 1485], 60.00th=[ 1519], 00:33:46.930 | 70.00th=[ 1586], 80.00th=[ 2123], 90.00th=[ 2836], 95.00th=[ 3104], 00:33:46.930 | 99.00th=[ 5067], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:33:46.930 | 99.99th=[ 5201] 00:33:46.930 bw ( KiB/s): min=10240, max=143360, per=2.47%, avg=92248.10, stdev=46305.13, samples=10 00:33:46.930 iops : min= 10, max= 140, avg=89.90, stdev=45.22, samples=10 00:33:46.930 lat (msec) : 1000=24.28%, 2000=53.79%, >=2000=21.92% 00:33:46.930 cpu : usr=0.05%, sys=1.48%, ctx=892, majf=0, minf=32769 00:33:46.930 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:33:46.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.930 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.930 issued rwts: total=593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.930 job2: (groupid=0, jobs=1): err= 0: pid=2296596: Tue Jun 11 14:01:38 2024 00:33:46.930 read: IOPS=68, BW=68.6MiB/s (71.9MB/s)(733MiB/10685msec) 00:33:46.930 slat (usec): min=26, max=2095.4k, avg=14474.71, stdev=108846.99 00:33:46.930 clat (msec): min=71, max=5293, avg=1744.54, stdev=1430.19 00:33:46.930 lat (msec): min=494, max=5296, avg=1759.01, stdev=1431.79 00:33:46.930 clat percentiles (msec): 00:33:46.930 | 1.00th=[ 502], 5.00th=[ 550], 10.00th=[ 718], 20.00th=[ 902], 00:33:46.930 | 30.00th=[ 978], 40.00th=[ 1116], 50.00th=[ 1250], 60.00th=[ 1318], 00:33:46.930 | 70.00th=[ 1385], 80.00th=[ 1569], 90.00th=[ 4732], 95.00th=[ 5067], 00:33:46.930 | 99.00th=[ 5201], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:33:46.930 | 99.99th=[ 5269] 00:33:46.930 bw ( KiB/s): min= 6144, max=262144, per=2.55%, avg=95301.62, stdev=66429.30, samples=13 00:33:46.930 iops : min= 6, max= 256, avg=93.00, stdev=64.91, 
samples=13 00:33:46.930 lat (msec) : 100=0.14%, 500=0.95%, 750=10.78%, 1000=19.37%, 2000=50.34% 00:33:46.930 lat (msec) : >=2000=18.42% 00:33:46.930 cpu : usr=0.05%, sys=0.85%, ctx=1545, majf=0, minf=32769 00:33:46.930 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:33:46.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.930 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.930 issued rwts: total=733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.930 job2: (groupid=0, jobs=1): err= 0: pid=2296597: Tue Jun 11 14:01:38 2024 00:33:46.930 read: IOPS=22, BW=22.2MiB/s (23.3MB/s)(243MiB/10935msec) 00:33:46.930 slat (usec): min=655, max=2129.4k, avg=44758.16, stdev=229081.15 00:33:46.930 clat (msec): min=56, max=6845, avg=4658.75, stdev=1227.29 00:33:46.930 lat (msec): min=2185, max=6857, avg=4703.51, stdev=1186.50 00:33:46.930 clat percentiles (msec): 00:33:46.930 | 1.00th=[ 2198], 5.00th=[ 2702], 10.00th=[ 2802], 20.00th=[ 3071], 00:33:46.930 | 30.00th=[ 4396], 40.00th=[ 4665], 50.00th=[ 4933], 60.00th=[ 5201], 00:33:46.930 | 70.00th=[ 5336], 80.00th=[ 5604], 90.00th=[ 6141], 95.00th=[ 6544], 00:33:46.930 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6879], 99.95th=[ 6879], 00:33:46.930 | 99.99th=[ 6879] 00:33:46.930 bw ( KiB/s): min= 4096, max=69493, per=0.90%, avg=33623.43, stdev=25298.39, samples=7 00:33:46.930 iops : min= 4, max= 67, avg=32.57, stdev=24.67, samples=7 00:33:46.930 lat (msec) : 100=0.41%, >=2000=99.59% 00:33:46.930 cpu : usr=0.02%, sys=1.47%, ctx=588, majf=0, minf=32207 00:33:46.930 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.6%, 32=13.2%, >=64=74.1% 00:33:46.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.930 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:33:46.930 issued rwts: total=243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.930 job3: (groupid=0, jobs=1): err= 0: pid=2296602: Tue Jun 11 14:01:38 2024 00:33:46.930 read: IOPS=27, BW=27.5MiB/s (28.8MB/s)(292MiB/10633msec) 00:33:46.930 slat (usec): min=380, max=2078.0k, avg=36153.49, stdev=211507.78 00:33:46.930 clat (msec): min=74, max=7888, avg=4244.57, stdev=3031.58 00:33:46.930 lat (msec): min=956, max=8309, avg=4280.72, stdev=3035.95 00:33:46.930 clat percentiles (msec): 00:33:46.930 | 1.00th=[ 961], 5.00th=[ 1070], 10.00th=[ 1183], 20.00th=[ 1452], 00:33:46.930 | 30.00th=[ 1586], 40.00th=[ 1687], 50.00th=[ 1972], 60.00th=[ 7483], 00:33:46.930 | 70.00th=[ 7684], 80.00th=[ 7752], 90.00th=[ 7819], 95.00th=[ 7886], 00:33:46.930 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:33:46.930 | 99.99th=[ 7886] 00:33:46.930 bw ( KiB/s): min= 4096, max=116736, per=1.08%, avg=40356.38, stdev=41452.42, samples=8 00:33:46.930 iops : min= 4, max= 114, avg=39.38, stdev=40.49, samples=8 00:33:46.930 lat (msec) : 100=0.34%, 1000=1.37%, 2000=48.63%, >=2000=49.66% 00:33:46.930 cpu : usr=0.03%, sys=0.60%, ctx=844, majf=0, minf=32769 00:33:46.930 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:33:46.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.930 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:33:46.930 issued rwts: total=292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.930 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:33:46.930 job3: (groupid=0, jobs=1): err= 0: pid=2296603: Tue Jun 11 14:01:38 2024 00:33:46.930 read: IOPS=7, BW=7597KiB/s (7779kB/s)(81.0MiB/10918msec) 00:33:46.930 slat (usec): min=657, max=2141.4k, avg=133980.40, stdev=502708.11 00:33:46.930 clat (msec): min=64, max=10916, avg=9573.60, stdev=2844.38 00:33:46.930 lat (msec): min=2083, max=10917, avg=9707.58, stdev=2639.05 00:33:46.930 clat percentiles (msec): 00:33:46.930 | 1.00th=[ 65], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[10671], 00:33:46.930 | 30.00th=[10671], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805], 00:33:46.930 | 70.00th=[10805], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939], 00:33:46.930 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:33:46.931 | 99.99th=[10939] 00:33:46.931 lat (msec) : 100=1.23%, >=2000=98.77% 00:33:46.931 cpu : usr=0.00%, sys=1.15%, ctx=125, majf=0, minf=20737 00:33:46.931 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:33:46.931 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296604: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=232, BW=232MiB/s (243MB/s)(2497MiB/10760msec) 00:33:46.931 slat (usec): min=23, max=2088.7k, avg=4273.17, stdev=42022.45 00:33:46.931 clat (msec): min=76, max=2492, avg=518.99, stdev=486.14 00:33:46.931 lat (msec): min=294, max=2498, avg=523.26, stdev=488.01 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 296], 5.00th=[ 296], 10.00th=[ 300], 20.00th=[ 300], 00:33:46.931 | 30.00th=[ 300], 40.00th=[ 305], 50.00th=[ 309], 60.00th=[ 334], 00:33:46.931 | 70.00th=[ 401], 80.00th=[ 443], 90.00th=[ 1116], 95.00th=[ 2198], 00:33:46.931 | 99.00th=[ 2433], 99.50th=[ 2467], 99.90th=[ 2500], 99.95th=[ 2500], 00:33:46.931 | 99.99th=[ 2500] 00:33:46.931 bw ( KiB/s): min=47104, max=437397, per=8.10%, avg=303122.87, stdev=147631.94, samples=16 00:33:46.931 iops : min= 46, max= 427, avg=296.00, stdev=144.15, samples=16 00:33:46.931 lat (msec) : 100=0.04%, 500=81.22%, 750=2.88%, 1000=3.00%, 2000=7.77% 00:33:46.931 lat (msec) : >=2000=5.09% 00:33:46.931 cpu : usr=0.17%, sys=2.76%, ctx=2829, majf=0, minf=32769 00:33:46.931 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.931 issued rwts: total=2497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296605: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(174MiB/10777msec) 00:33:46.931 slat (usec): min=266, max=2147.8k, avg=61588.85, stdev=303931.93 00:33:46.931 clat (msec): min=59, max=8309, avg=6091.01, stdev=2121.47 00:33:46.931 lat (msec): min=1765, max=8313, avg=6152.60, stdev=2054.64 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 1754], 5.00th=[ 1787], 10.00th=[ 2089], 20.00th=[ 3675], 00:33:46.931 | 30.00th=[ 4329], 40.00th=[ 6812], 50.00th=[ 7080], 60.00th=[ 7282], 00:33:46.931 | 70.00th=[ 7617], 80.00th=[ 7819], 90.00th=[ 8020], 95.00th=[ 8154], 00:33:46.931 | 99.00th=[ 
8288], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:33:46.931 | 99.99th=[ 8288] 00:33:46.931 bw ( KiB/s): min= 2048, max=55296, per=0.63%, avg=23552.00, stdev=22589.98, samples=4 00:33:46.931 iops : min= 2, max= 54, avg=23.00, stdev=22.06, samples=4 00:33:46.931 lat (msec) : 100=0.57%, 2000=5.75%, >=2000=93.68% 00:33:46.931 cpu : usr=0.03%, sys=0.70%, ctx=451, majf=0, minf=32769 00:33:46.931 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.2%, 32=18.4%, >=64=63.8% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=97.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.1% 00:33:46.931 issued rwts: total=174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296606: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=144, BW=145MiB/s (152MB/s)(1455MiB/10051msec) 00:33:46.931 slat (usec): min=34, max=85184, avg=6870.98, stdev=14236.50 00:33:46.931 clat (msec): min=42, max=1173, avg=848.10, stdev=241.32 00:33:46.931 lat (msec): min=63, max=1177, avg=854.97, stdev=242.68 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 140], 5.00th=[ 542], 10.00th=[ 550], 20.00th=[ 558], 00:33:46.931 | 30.00th=[ 625], 40.00th=[ 869], 50.00th=[ 936], 60.00th=[ 961], 00:33:46.931 | 70.00th=[ 1053], 80.00th=[ 1083], 90.00th=[ 1083], 95.00th=[ 1099], 00:33:46.931 | 99.00th=[ 1167], 99.50th=[ 1167], 99.90th=[ 1167], 99.95th=[ 1167], 00:33:46.931 | 99.99th=[ 1167] 00:33:46.931 bw ( KiB/s): min=108544, max=235520, per=3.84%, avg=143786.67, stdev=37007.61, samples=18 00:33:46.931 iops : min= 106, max= 230, avg=140.33, stdev=36.18, samples=18 00:33:46.931 lat (msec) : 50=0.07%, 100=0.07%, 250=1.72%, 500=2.47%, 750=29.07% 00:33:46.931 lat (msec) : 1000=30.45%, 2000=36.15% 00:33:46.931 cpu : usr=0.08%, sys=2.59%, ctx=1352, majf=0, minf=32769 00:33:46.931 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.931 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296608: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=22, BW=22.2MiB/s (23.2MB/s)(236MiB/10644msec) 00:33:46.931 slat (usec): min=541, max=2105.9k, avg=42372.50, stdev=267168.67 00:33:46.931 clat (msec): min=494, max=9579, avg=1405.36, stdev=1900.77 00:33:46.931 lat (msec): min=498, max=9590, avg=1447.74, stdev=1978.02 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 518], 5.00th=[ 558], 10.00th=[ 617], 20.00th=[ 667], 00:33:46.931 | 30.00th=[ 709], 40.00th=[ 760], 50.00th=[ 802], 60.00th=[ 860], 00:33:46.931 | 70.00th=[ 944], 80.00th=[ 1053], 90.00th=[ 3339], 95.00th=[ 7550], 00:33:46.931 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:33:46.931 | 99.99th=[ 9597] 00:33:46.931 bw ( KiB/s): min=204424, max=204424, per=5.47%, avg=204424.00, stdev= 0.00, samples=1 00:33:46.931 iops : min= 199, max= 199, avg=199.00, stdev= 0.00, samples=1 00:33:46.931 lat (msec) : 500=0.85%, 750=36.44%, 1000=39.41%, 2000=11.44%, >=2000=11.86% 00:33:46.931 cpu : usr=0.00%, sys=0.65%, ctx=580, majf=0, minf=32769 00:33:46.931 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.3% 00:33:46.931 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:33:46.931 issued rwts: total=236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296609: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=36, BW=36.2MiB/s (37.9MB/s)(363MiB/10036msec) 00:33:46.931 slat (usec): min=29, max=2126.0k, avg=27559.07, stdev=189455.32 00:33:46.931 clat (msec): min=29, max=5943, avg=1108.86, stdev=844.75 00:33:46.931 lat (msec): min=42, max=7968, avg=1136.42, stdev=930.75 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 45], 5.00th=[ 73], 10.00th=[ 106], 20.00th=[ 201], 00:33:46.931 | 30.00th=[ 659], 40.00th=[ 1028], 50.00th=[ 1284], 60.00th=[ 1401], 00:33:46.931 | 70.00th=[ 1469], 80.00th=[ 1620], 90.00th=[ 1687], 95.00th=[ 1720], 00:33:46.931 | 99.00th=[ 5940], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:33:46.931 | 99.99th=[ 5940] 00:33:46.931 bw ( KiB/s): min=10240, max=96256, per=1.93%, avg=72147.00, stdev=41354.35, samples=4 00:33:46.931 iops : min= 10, max= 94, avg=70.25, stdev=40.27, samples=4 00:33:46.931 lat (msec) : 50=2.75%, 100=5.79%, 250=12.12%, 500=4.41%, 750=6.34% 00:33:46.931 lat (msec) : 1000=7.71%, 2000=58.68%, >=2000=2.20% 00:33:46.931 cpu : usr=0.05%, sys=0.80%, ctx=583, majf=0, minf=32769 00:33:46.931 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.6% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:33:46.931 issued rwts: total=363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296610: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=60, BW=60.6MiB/s (63.5MB/s)(652MiB/10762msec) 00:33:46.931 slat (usec): min=27, max=2110.0k, avg=16390.96, stdev=139204.89 00:33:46.931 clat (msec): min=70, max=5131, avg=1300.24, stdev=946.22 00:33:46.931 lat (msec): min=726, max=5139, avg=1316.63, stdev=956.51 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 726], 5.00th=[ 726], 10.00th=[ 735], 20.00th=[ 743], 00:33:46.931 | 30.00th=[ 785], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 936], 00:33:46.931 | 70.00th=[ 995], 80.00th=[ 2299], 90.00th=[ 2735], 95.00th=[ 2937], 00:33:46.931 | 99.00th=[ 5067], 99.50th=[ 5067], 99.90th=[ 5134], 99.95th=[ 5134], 00:33:46.931 | 99.99th=[ 5134] 00:33:46.931 bw ( KiB/s): min= 8192, max=180224, per=3.58%, avg=134072.75, stdev=54283.32, samples=8 00:33:46.931 iops : min= 8, max= 176, avg=130.75, stdev=52.98, samples=8 00:33:46.931 lat (msec) : 100=0.15%, 750=24.39%, 1000=46.32%, 2000=6.44%, >=2000=22.70% 00:33:46.931 cpu : usr=0.00%, sys=1.15%, ctx=688, majf=0, minf=32769 00:33:46.931 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.931 issued rwts: total=652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296611: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=2, BW=2801KiB/s (2868kB/s)(29.0MiB/10601msec) 00:33:46.931 slat (usec): min=678, max=2103.7k, avg=362947.16, 
stdev=790403.29 00:33:46.931 clat (msec): min=74, max=8615, avg=4534.17, stdev=2516.51 00:33:46.931 lat (msec): min=2131, max=10600, avg=4897.12, stdev=2607.79 00:33:46.931 clat percentiles (msec): 00:33:46.931 | 1.00th=[ 75], 5.00th=[ 2140], 10.00th=[ 2140], 20.00th=[ 2165], 00:33:46.931 | 30.00th=[ 2165], 40.00th=[ 2198], 50.00th=[ 4329], 60.00th=[ 6409], 00:33:46.931 | 70.00th=[ 6477], 80.00th=[ 6477], 90.00th=[ 8557], 95.00th=[ 8658], 00:33:46.931 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:33:46.931 | 99.99th=[ 8658] 00:33:46.931 lat (msec) : 100=3.45%, >=2000=96.55% 00:33:46.931 cpu : usr=0.00%, sys=0.22%, ctx=63, majf=0, minf=7425 00:33:46.931 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:33:46.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.931 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:33:46.931 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.931 job3: (groupid=0, jobs=1): err= 0: pid=2296612: Tue Jun 11 14:01:38 2024 00:33:46.931 read: IOPS=40, BW=40.2MiB/s (42.1MB/s)(429MiB/10678msec) 00:33:46.931 slat (usec): min=54, max=1976.3k, avg=23306.53, stdev=138826.15 00:33:46.932 clat (msec): min=677, max=6818, avg=2912.12, stdev=2234.36 00:33:46.932 lat (msec): min=679, max=6862, avg=2935.43, stdev=2243.87 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 693], 5.00th=[ 735], 10.00th=[ 785], 20.00th=[ 936], 00:33:46.932 | 30.00th=[ 1116], 40.00th=[ 1603], 50.00th=[ 1955], 60.00th=[ 2198], 00:33:46.932 | 70.00th=[ 3507], 80.00th=[ 6275], 90.00th=[ 6611], 95.00th=[ 6745], 00:33:46.932 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:33:46.932 | 99.99th=[ 6812] 00:33:46.932 bw ( KiB/s): min= 2048, max=192231, per=1.45%, avg=54339.55, stdev=50107.97, samples=11 00:33:46.932 iops : min= 2, max= 187, avg=53.00, stdev=48.73, samples=11 00:33:46.932 lat (msec) : 750=6.53%, 1000=15.85%, 2000=28.44%, >=2000=49.18% 00:33:46.932 cpu : usr=0.05%, sys=0.79%, ctx=1203, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:33:46.932 issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.932 job3: (groupid=0, jobs=1): err= 0: pid=2296613: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=79, BW=79.2MiB/s (83.0MB/s)(798MiB/10082msec) 00:33:46.932 slat (usec): min=29, max=122908, avg=12546.79, stdev=25548.73 00:33:46.932 clat (msec): min=64, max=5200, avg=1502.70, stdev=490.30 00:33:46.932 lat (msec): min=127, max=5212, avg=1515.25, stdev=490.54 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 171], 5.00th=[ 877], 10.00th=[ 1070], 20.00th=[ 1083], 00:33:46.932 | 30.00th=[ 1183], 40.00th=[ 1351], 50.00th=[ 1552], 60.00th=[ 1636], 00:33:46.932 | 70.00th=[ 1754], 80.00th=[ 1821], 90.00th=[ 2056], 95.00th=[ 2165], 00:33:46.932 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 5201], 99.95th=[ 5201], 00:33:46.932 | 99.99th=[ 5201] 00:33:46.932 bw ( KiB/s): min=36864, max=126976, per=2.21%, avg=82688.00, stdev=30311.09, samples=16 00:33:46.932 iops : min= 36, max= 124, avg=80.75, stdev=29.60, samples=16 00:33:46.932 lat (msec) : 100=0.13%, 
250=1.00%, 500=1.50%, 750=1.63%, 1000=1.00% 00:33:46.932 lat (msec) : 2000=82.33%, >=2000=12.41% 00:33:46.932 cpu : usr=0.05%, sys=1.76%, ctx=1422, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.932 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.932 job3: (groupid=0, jobs=1): err= 0: pid=2296614: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=23, BW=23.4MiB/s (24.5MB/s)(251MiB/10734msec) 00:33:46.932 slat (usec): min=44, max=2121.3k, avg=42476.78, stdev=254636.04 00:33:46.932 clat (msec): min=71, max=7203, avg=4195.24, stdev=2595.55 00:33:46.932 lat (msec): min=998, max=7209, avg=4237.72, stdev=2578.56 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 978], 5.00th=[ 1150], 10.00th=[ 1250], 20.00th=[ 1385], 00:33:46.932 | 30.00th=[ 1452], 40.00th=[ 1603], 50.00th=[ 5067], 60.00th=[ 6611], 00:33:46.932 | 70.00th=[ 6678], 80.00th=[ 6812], 90.00th=[ 7013], 95.00th=[ 7080], 00:33:46.932 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:33:46.932 | 99.99th=[ 7215] 00:33:46.932 bw ( KiB/s): min= 2048, max=137216, per=1.68%, avg=62976.00, stdev=65415.89, samples=4 00:33:46.932 iops : min= 2, max= 134, avg=61.50, stdev=63.88, samples=4 00:33:46.932 lat (msec) : 100=0.40%, 1000=2.39%, 2000=37.85%, >=2000=59.36% 00:33:46.932 cpu : usr=0.01%, sys=0.65%, ctx=485, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:33:46.932 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.932 job3: (groupid=0, jobs=1): err= 0: pid=2296615: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=47, BW=47.7MiB/s (50.1MB/s)(521MiB/10911msec) 00:33:46.932 slat (usec): min=32, max=2139.4k, avg=20800.61, stdev=161952.20 00:33:46.932 clat (msec): min=71, max=8220, avg=2565.26, stdev=2951.21 00:33:46.932 lat (msec): min=735, max=8221, avg=2586.06, stdev=2956.67 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 735], 5.00th=[ 760], 10.00th=[ 810], 20.00th=[ 860], 00:33:46.932 | 30.00th=[ 877], 40.00th=[ 894], 50.00th=[ 919], 60.00th=[ 927], 00:33:46.932 | 70.00th=[ 961], 80.00th=[ 7483], 90.00th=[ 7886], 95.00th=[ 8020], 00:33:46.932 | 99.00th=[ 8154], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:33:46.932 | 99.99th=[ 8221] 00:33:46.932 bw ( KiB/s): min= 2048, max=161792, per=2.15%, avg=80485.50, stdev=68001.37, samples=10 00:33:46.932 iops : min= 2, max= 158, avg=78.50, stdev=66.53, samples=10 00:33:46.932 lat (msec) : 100=0.19%, 750=3.65%, 1000=70.44%, 2000=0.58%, >=2000=25.14% 00:33:46.932 cpu : usr=0.00%, sys=1.52%, ctx=948, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:33:46.932 issued rwts: total=521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:33:46.932 job4: (groupid=0, jobs=1): err= 0: pid=2296632: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(312MiB/10682msec) 00:33:46.932 slat (usec): min=25, max=2056.4k, avg=33984.86, stdev=196406.96 00:33:46.932 clat (msec): min=76, max=5735, avg=3125.87, stdev=1618.84 00:33:46.932 lat (msec): min=1245, max=5758, avg=3159.85, stdev=1609.74 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 1318], 5.00th=[ 1435], 10.00th=[ 1485], 20.00th=[ 1586], 00:33:46.932 | 30.00th=[ 1838], 40.00th=[ 1989], 50.00th=[ 2165], 60.00th=[ 4212], 00:33:46.932 | 70.00th=[ 4665], 80.00th=[ 5134], 90.00th=[ 5336], 95.00th=[ 5470], 00:33:46.932 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5738], 99.95th=[ 5738], 00:33:46.932 | 99.99th=[ 5738] 00:33:46.932 bw ( KiB/s): min= 4096, max=102400, per=1.44%, avg=53833.14, stdev=38870.92, samples=7 00:33:46.932 iops : min= 4, max= 100, avg=52.57, stdev=37.96, samples=7 00:33:46.932 lat (msec) : 100=0.32%, 2000=40.38%, >=2000=59.29% 00:33:46.932 cpu : usr=0.02%, sys=0.90%, ctx=917, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.8% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:33:46.932 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.932 job4: (groupid=0, jobs=1): err= 0: pid=2296633: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=61, BW=61.1MiB/s (64.1MB/s)(651MiB/10649msec) 00:33:46.932 slat (usec): min=88, max=2172.6k, avg=16239.19, stdev=118122.16 00:33:46.932 clat (msec): min=75, max=5164, avg=1914.08, stdev=1478.67 00:33:46.932 lat (msec): min=698, max=5171, avg=1930.32, stdev=1479.18 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 701], 5.00th=[ 709], 10.00th=[ 735], 20.00th=[ 802], 00:33:46.932 | 30.00th=[ 877], 40.00th=[ 1217], 50.00th=[ 1334], 60.00th=[ 1418], 00:33:46.932 | 70.00th=[ 1921], 80.00th=[ 2106], 90.00th=[ 4799], 95.00th=[ 5000], 00:33:46.932 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134], 00:33:46.932 | 99.99th=[ 5134] 00:33:46.932 bw ( KiB/s): min= 2048, max=182272, per=2.86%, avg=107110.40, stdev=52800.20, samples=10 00:33:46.932 iops : min= 2, max= 178, avg=104.60, stdev=51.56, samples=10 00:33:46.932 lat (msec) : 100=0.15%, 750=15.51%, 1000=17.97%, 2000=39.48%, >=2000=26.88% 00:33:46.932 cpu : usr=0.02%, sys=0.83%, ctx=2050, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.932 issued rwts: total=651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.932 job4: (groupid=0, jobs=1): err= 0: pid=2296634: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=222, BW=223MiB/s (233MB/s)(2229MiB/10012msec) 00:33:46.932 slat (usec): min=23, max=72415, avg=4480.60, stdev=4329.79 00:33:46.932 clat (msec): min=10, max=1048, avg=540.71, stdev=263.61 00:33:46.932 lat (msec): min=11, max=1063, avg=545.19, stdev=265.63 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 28], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 368], 00:33:46.932 | 30.00th=[ 397], 40.00th=[ 575], 50.00th=[ 584], 60.00th=[ 592], 
00:33:46.932 | 70.00th=[ 651], 80.00th=[ 776], 90.00th=[ 869], 95.00th=[ 1003], 00:33:46.932 | 99.00th=[ 1028], 99.50th=[ 1028], 99.90th=[ 1036], 99.95th=[ 1045], 00:33:46.932 | 99.99th=[ 1053] 00:33:46.932 bw ( KiB/s): min=118784, max=333824, per=5.47%, avg=204559.06, stdev=57562.47, samples=17 00:33:46.932 iops : min= 116, max= 326, avg=199.76, stdev=56.21, samples=17 00:33:46.932 lat (msec) : 20=0.54%, 50=2.11%, 100=10.50%, 250=4.22%, 500=16.38% 00:33:46.932 lat (msec) : 750=42.66%, 1000=18.35%, 2000=5.25% 00:33:46.932 cpu : usr=0.16%, sys=3.22%, ctx=3892, majf=0, minf=32769 00:33:46.932 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:33:46.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.932 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.932 job4: (groupid=0, jobs=1): err= 0: pid=2296635: Tue Jun 11 14:01:38 2024 00:33:46.932 read: IOPS=60, BW=60.7MiB/s (63.6MB/s)(645MiB/10629msec) 00:33:46.932 slat (usec): min=28, max=2108.8k, avg=16356.23, stdev=88169.68 00:33:46.932 clat (msec): min=76, max=3647, avg=1846.15, stdev=821.09 00:33:46.932 lat (msec): min=831, max=3647, avg=1862.50, stdev=818.44 00:33:46.932 clat percentiles (msec): 00:33:46.932 | 1.00th=[ 835], 5.00th=[ 835], 10.00th=[ 844], 20.00th=[ 927], 00:33:46.932 | 30.00th=[ 1334], 40.00th=[ 1687], 50.00th=[ 1754], 60.00th=[ 1838], 00:33:46.932 | 70.00th=[ 2123], 80.00th=[ 2366], 90.00th=[ 3239], 95.00th=[ 3440], 00:33:46.932 | 99.00th=[ 3574], 99.50th=[ 3641], 99.90th=[ 3641], 99.95th=[ 3641], 00:33:46.932 | 99.99th=[ 3641] 00:33:46.932 bw ( KiB/s): min= 2048, max=157696, per=2.18%, avg=81425.92, stdev=47874.04, samples=13 00:33:46.932 iops : min= 2, max= 154, avg=79.38, stdev=46.79, samples=13 00:33:46.932 lat (msec) : 100=0.16%, 1000=27.75%, 2000=38.29%, >=2000=33.80% 00:33:46.933 cpu : usr=0.05%, sys=0.81%, ctx=1851, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.933 issued rwts: total=645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296636: Tue Jun 11 14:01:38 2024 00:33:46.933 read: IOPS=92, BW=92.6MiB/s (97.1MB/s)(1009MiB/10892msec) 00:33:46.933 slat (usec): min=28, max=2069.0k, avg=10710.66, stdev=91561.08 00:33:46.933 clat (msec): min=79, max=4933, avg=1323.83, stdev=1241.87 00:33:46.933 lat (msec): min=556, max=4934, avg=1334.54, stdev=1244.44 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 558], 5.00th=[ 584], 10.00th=[ 634], 20.00th=[ 667], 00:33:46.933 | 30.00th=[ 701], 40.00th=[ 852], 50.00th=[ 927], 60.00th=[ 961], 00:33:46.933 | 70.00th=[ 995], 80.00th=[ 1133], 90.00th=[ 4396], 95.00th=[ 4665], 00:33:46.933 | 99.00th=[ 4866], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:33:46.933 | 99.99th=[ 4933] 00:33:46.933 bw ( KiB/s): min=16416, max=225280, per=3.45%, avg=129016.71, stdev=64515.92, samples=14 00:33:46.933 iops : min= 16, max= 220, avg=125.86, stdev=62.96, samples=14 00:33:46.933 lat (msec) : 100=0.10%, 750=30.53%, 1000=40.04%, 2000=15.96%, >=2000=13.38% 00:33:46.933 cpu : usr=0.04%, sys=1.88%, 
ctx=2059, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.933 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296637: Tue Jun 11 14:01:38 2024 00:33:46.933 read: IOPS=89, BW=89.1MiB/s (93.5MB/s)(902MiB/10118msec) 00:33:46.933 slat (usec): min=40, max=1975.1k, avg=11107.24, stdev=67974.61 00:33:46.933 clat (msec): min=91, max=4015, avg=1154.81, stdev=692.64 00:33:46.933 lat (msec): min=166, max=4018, avg=1165.92, stdev=698.61 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 186], 5.00th=[ 439], 10.00th=[ 785], 20.00th=[ 894], 00:33:46.933 | 30.00th=[ 936], 40.00th=[ 953], 50.00th=[ 1003], 60.00th=[ 1070], 00:33:46.933 | 70.00th=[ 1099], 80.00th=[ 1150], 90.00th=[ 1620], 95.00th=[ 3641], 00:33:46.933 | 99.00th=[ 3977], 99.50th=[ 4010], 99.90th=[ 4010], 99.95th=[ 4010], 00:33:46.933 | 99.99th=[ 4010] 00:33:46.933 bw ( KiB/s): min=40878, max=157696, per=3.25%, avg=121612.25, stdev=29076.51, samples=12 00:33:46.933 iops : min= 39, max= 154, avg=118.50, stdev=28.56, samples=12 00:33:46.933 lat (msec) : 100=0.11%, 250=1.66%, 500=3.44%, 750=3.33%, 1000=40.35% 00:33:46.933 lat (msec) : 2000=45.90%, >=2000=5.21% 00:33:46.933 cpu : usr=0.03%, sys=2.39%, ctx=955, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.933 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296638: Tue Jun 11 14:01:38 2024 00:33:46.933 read: IOPS=98, BW=98.8MiB/s (104MB/s)(1051MiB/10635msec) 00:33:46.933 slat (usec): min=27, max=2115.2k, avg=10038.46, stdev=90882.82 00:33:46.933 clat (msec): min=79, max=4892, avg=1216.75, stdev=1257.30 00:33:46.933 lat (msec): min=525, max=4894, avg=1226.79, stdev=1260.13 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 527], 5.00th=[ 550], 10.00th=[ 567], 20.00th=[ 584], 00:33:46.933 | 30.00th=[ 667], 40.00th=[ 709], 50.00th=[ 726], 60.00th=[ 743], 00:33:46.933 | 70.00th=[ 911], 80.00th=[ 1116], 90.00th=[ 4463], 95.00th=[ 4665], 00:33:46.933 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:33:46.933 | 99.99th=[ 4866] 00:33:46.933 bw ( KiB/s): min=14336, max=241664, per=4.21%, avg=157495.67, stdev=64932.31, samples=12 00:33:46.933 iops : min= 14, max= 236, avg=153.75, stdev=63.39, samples=12 00:33:46.933 lat (msec) : 100=0.10%, 750=61.27%, 1000=13.42%, 2000=12.46%, >=2000=12.75% 00:33:46.933 cpu : usr=0.06%, sys=1.18%, ctx=2005, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.933 issued rwts: total=1051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296640: Tue Jun 11 
14:01:38 2024 00:33:46.933 read: IOPS=59, BW=59.5MiB/s (62.4MB/s)(645MiB/10843msec) 00:33:46.933 slat (usec): min=31, max=4215.5k, avg=16687.18, stdev=166207.20 00:33:46.933 clat (msec): min=76, max=5408, avg=1994.78, stdev=1508.84 00:33:46.933 lat (msec): min=539, max=5413, avg=2011.47, stdev=1509.48 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 542], 5.00th=[ 659], 10.00th=[ 835], 20.00th=[ 969], 00:33:46.933 | 30.00th=[ 1045], 40.00th=[ 1133], 50.00th=[ 1250], 60.00th=[ 1620], 00:33:46.933 | 70.00th=[ 1955], 80.00th=[ 2299], 90.00th=[ 4933], 95.00th=[ 5201], 00:33:46.933 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:33:46.933 | 99.99th=[ 5403] 00:33:46.933 bw ( KiB/s): min= 2048, max=171688, per=2.57%, avg=96224.73, stdev=53638.82, samples=11 00:33:46.933 iops : min= 2, max= 167, avg=93.91, stdev=52.29, samples=11 00:33:46.933 lat (msec) : 100=0.16%, 750=7.29%, 1000=14.73%, 2000=49.77%, >=2000=28.06% 00:33:46.933 cpu : usr=0.00%, sys=1.38%, ctx=1974, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.933 issued rwts: total=645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296641: Tue Jun 11 14:01:38 2024 00:33:46.933 read: IOPS=429, BW=429MiB/s (450MB/s)(4627MiB/10782msec) 00:33:46.933 slat (usec): min=23, max=2048.1k, avg=2309.32, stdev=30308.85 00:33:46.933 clat (msec): min=77, max=2316, avg=287.82, stdev=365.12 00:33:46.933 lat (msec): min=99, max=2317, avg=290.13, stdev=366.56 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 101], 5.00th=[ 102], 10.00th=[ 102], 20.00th=[ 102], 00:33:46.933 | 30.00th=[ 103], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 300], 00:33:46.933 | 70.00th=[ 317], 80.00th=[ 401], 90.00th=[ 502], 95.00th=[ 523], 00:33:46.933 | 99.00th=[ 2265], 99.50th=[ 2299], 99.90th=[ 2299], 99.95th=[ 2299], 00:33:46.933 | 99.99th=[ 2333] 00:33:46.933 bw ( KiB/s): min=139264, max=1282048, per=14.49%, avg=541952.65, stdev=427142.70, samples=17 00:33:46.933 iops : min= 136, max= 1252, avg=529.24, stdev=417.14, samples=17 00:33:46.933 lat (msec) : 100=0.69%, 250=53.17%, 500=34.77%, 750=8.62%, >=2000=2.74% 00:33:46.933 cpu : usr=0.25%, sys=3.69%, ctx=4406, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.933 issued rwts: total=4627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296642: Tue Jun 11 14:01:38 2024 00:33:46.933 read: IOPS=185, BW=185MiB/s (194MB/s)(1857MiB/10023msec) 00:33:46.933 slat (usec): min=202, max=72425, avg=5378.88, stdev=4520.13 00:33:46.933 clat (msec): min=21, max=1078, avg=662.62, stdev=211.28 00:33:46.933 lat (msec): min=25, max=1092, avg=668.00, stdev=212.58 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 80], 5.00th=[ 321], 10.00th=[ 414], 20.00th=[ 426], 00:33:46.933 | 30.00th=[ 617], 40.00th=[ 625], 50.00th=[ 634], 60.00th=[ 684], 00:33:46.933 | 70.00th=[ 793], 80.00th=[ 844], 90.00th=[ 986], 
95.00th=[ 1011], 00:33:46.933 | 99.00th=[ 1053], 99.50th=[ 1070], 99.90th=[ 1083], 99.95th=[ 1083], 00:33:46.933 | 99.99th=[ 1083] 00:33:46.933 bw ( KiB/s): min=57229, max=313344, per=4.81%, avg=179793.06, stdev=55536.18, samples=18 00:33:46.933 iops : min= 55, max= 306, avg=175.39, stdev=54.34, samples=18 00:33:46.933 lat (msec) : 50=0.59%, 100=0.70%, 250=2.48%, 500=17.50%, 750=47.28% 00:33:46.933 lat (msec) : 1000=24.82%, 2000=6.62% 00:33:46.933 cpu : usr=0.10%, sys=3.94%, ctx=3626, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.933 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296643: Tue Jun 11 14:01:38 2024 00:33:46.933 read: IOPS=44, BW=44.5MiB/s (46.7MB/s)(471MiB/10585msec) 00:33:46.933 slat (usec): min=23, max=4212.5k, avg=22351.37, stdev=194578.14 00:33:46.933 clat (msec): min=55, max=6098, avg=2582.75, stdev=1673.33 00:33:46.933 lat (msec): min=1270, max=6099, avg=2605.10, stdev=1671.41 00:33:46.933 clat percentiles (msec): 00:33:46.933 | 1.00th=[ 1318], 5.00th=[ 1351], 10.00th=[ 1385], 20.00th=[ 1485], 00:33:46.933 | 30.00th=[ 1569], 40.00th=[ 1620], 50.00th=[ 1670], 60.00th=[ 1720], 00:33:46.933 | 70.00th=[ 1854], 80.00th=[ 4799], 90.00th=[ 5537], 95.00th=[ 5940], 00:33:46.933 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:33:46.933 | 99.99th=[ 6074] 00:33:46.933 bw ( KiB/s): min=40960, max=120832, per=2.09%, avg=78051.56, stdev=25792.70, samples=9 00:33:46.933 iops : min= 40, max= 118, avg=76.22, stdev=25.19, samples=9 00:33:46.933 lat (msec) : 100=0.21%, 2000=72.61%, >=2000=27.18% 00:33:46.933 cpu : usr=0.00%, sys=0.70%, ctx=1556, majf=0, minf=32769 00:33:46.933 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:33:46.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.933 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:33:46.933 issued rwts: total=471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.933 job4: (groupid=0, jobs=1): err= 0: pid=2296644: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=94, BW=94.6MiB/s (99.2MB/s)(1032MiB/10913msec) 00:33:46.934 slat (usec): min=25, max=1921.7k, avg=10494.49, stdev=92995.91 00:33:46.934 clat (msec): min=77, max=4545, avg=1242.20, stdev=1069.92 00:33:46.934 lat (msec): min=468, max=4547, avg=1252.69, stdev=1073.63 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 481], 5.00th=[ 523], 10.00th=[ 609], 20.00th=[ 735], 00:33:46.934 | 30.00th=[ 743], 40.00th=[ 785], 50.00th=[ 810], 60.00th=[ 835], 00:33:46.934 | 70.00th=[ 885], 80.00th=[ 1519], 90.00th=[ 2635], 95.00th=[ 4463], 00:33:46.934 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:33:46.934 | 99.99th=[ 4530] 00:33:46.934 bw ( KiB/s): min=14336, max=258048, per=3.81%, avg=142316.38, stdev=71331.07, samples=13 00:33:46.934 iops : min= 14, max= 252, avg=138.77, stdev=69.80, samples=13 00:33:46.934 lat (msec) : 100=0.10%, 500=2.52%, 750=30.91%, 1000=41.18%, 2000=12.31% 00:33:46.934 lat (msec) : >=2000=12.98% 00:33:46.934 cpu : usr=0.05%, sys=2.06%, ctx=978, majf=0, minf=32769 00:33:46.934 IO depths 
: 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.934 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job4: (groupid=0, jobs=1): err= 0: pid=2296645: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=63, BW=64.0MiB/s (67.1MB/s)(683MiB/10676msec) 00:33:46.934 slat (usec): min=33, max=2167.0k, avg=15516.54, stdev=115775.49 00:33:46.934 clat (msec): min=75, max=5815, avg=1925.77, stdev=1623.02 00:33:46.934 lat (msec): min=423, max=5821, avg=1941.29, stdev=1625.56 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 426], 5.00th=[ 460], 10.00th=[ 558], 20.00th=[ 760], 00:33:46.934 | 30.00th=[ 944], 40.00th=[ 1368], 50.00th=[ 1485], 60.00th=[ 1536], 00:33:46.934 | 70.00th=[ 1653], 80.00th=[ 1770], 90.00th=[ 5201], 95.00th=[ 5537], 00:33:46.934 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805], 00:33:46.934 | 99.99th=[ 5805] 00:33:46.934 bw ( KiB/s): min= 4087, max=165888, per=2.34%, avg=87385.54, stdev=53968.76, samples=13 00:33:46.934 iops : min= 3, max= 162, avg=85.15, stdev=52.69, samples=13 00:33:46.934 lat (msec) : 100=0.15%, 500=7.03%, 750=11.27%, 1000=13.62%, 2000=49.05% 00:33:46.934 lat (msec) : >=2000=18.89% 00:33:46.934 cpu : usr=0.04%, sys=1.12%, ctx=1775, majf=0, minf=32769 00:33:46.934 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:33:46.934 issued rwts: total=683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job5: (groupid=0, jobs=1): err= 0: pid=2296655: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=5, BW=5931KiB/s (6073kB/s)(63.0MiB/10877msec) 00:33:46.934 slat (usec): min=627, max=2133.8k, avg=171356.33, stdev=565689.34 00:33:46.934 clat (msec): min=80, max=10875, avg=9336.13, stdev=2916.69 00:33:46.934 lat (msec): min=2132, max=10876, avg=9507.49, stdev=2670.93 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 82], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 8557], 00:33:46.934 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10805], 60.00th=[10805], 00:33:46.934 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:33:46.934 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:33:46.934 | 99.99th=[10939] 00:33:46.934 lat (msec) : 100=1.59%, >=2000=98.41% 00:33:46.934 cpu : usr=0.00%, sys=1.03%, ctx=110, majf=0, minf=16129 00:33:46.934 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:33:46.934 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job5: (groupid=0, jobs=1): err= 0: pid=2296656: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=38, BW=38.3MiB/s (40.2MB/s)(385MiB/10043msec) 00:33:46.934 slat (usec): min=93, max=1986.4k, avg=26001.10, stdev=104130.91 00:33:46.934 clat (msec): min=30, max=5396, avg=2260.37, stdev=1096.52 00:33:46.934 lat (msec): 
min=48, max=5411, avg=2286.37, stdev=1107.86 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 64], 5.00th=[ 485], 10.00th=[ 953], 20.00th=[ 1519], 00:33:46.934 | 30.00th=[ 1687], 40.00th=[ 1787], 50.00th=[ 2039], 60.00th=[ 2198], 00:33:46.934 | 70.00th=[ 3104], 80.00th=[ 3239], 90.00th=[ 3473], 95.00th=[ 3574], 00:33:46.934 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:33:46.934 | 99.99th=[ 5403] 00:33:46.934 bw ( KiB/s): min= 2043, max=100352, per=1.29%, avg=48326.90, stdev=30323.54, samples=10 00:33:46.934 iops : min= 1, max= 98, avg=47.00, stdev=29.86, samples=10 00:33:46.934 lat (msec) : 50=0.52%, 100=0.78%, 250=1.82%, 500=2.60%, 750=2.08% 00:33:46.934 lat (msec) : 1000=3.64%, 2000=36.62%, >=2000=51.95% 00:33:46.934 cpu : usr=0.02%, sys=1.14%, ctx=983, majf=0, minf=32769 00:33:46.934 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:33:46.934 issued rwts: total=385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job5: (groupid=0, jobs=1): err= 0: pid=2296657: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=43, BW=43.5MiB/s (45.7MB/s)(463MiB/10635msec) 00:33:46.934 slat (usec): min=33, max=4127.2k, avg=22789.89, stdev=213514.44 00:33:46.934 clat (msec): min=80, max=7438, avg=2754.68, stdev=2538.86 00:33:46.934 lat (msec): min=842, max=7440, avg=2777.47, stdev=2541.04 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 852], 5.00th=[ 869], 10.00th=[ 885], 20.00th=[ 919], 00:33:46.934 | 30.00th=[ 1099], 40.00th=[ 1250], 50.00th=[ 1418], 60.00th=[ 1536], 00:33:46.934 | 70.00th=[ 1653], 80.00th=[ 6678], 90.00th=[ 7013], 95.00th=[ 7148], 00:33:46.934 | 99.00th=[ 7349], 99.50th=[ 7349], 99.90th=[ 7416], 99.95th=[ 7416], 00:33:46.934 | 99.99th=[ 7416] 00:33:46.934 bw ( KiB/s): min= 4096, max=131072, per=2.04%, avg=76210.67, stdev=49110.20, samples=9 00:33:46.934 iops : min= 4, max= 128, avg=74.33, stdev=47.93, samples=9 00:33:46.934 lat (msec) : 100=0.22%, 1000=27.43%, 2000=44.06%, >=2000=28.29% 00:33:46.934 cpu : usr=0.01%, sys=0.80%, ctx=961, majf=0, minf=32769 00:33:46.934 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:33:46.934 issued rwts: total=463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job5: (groupid=0, jobs=1): err= 0: pid=2296658: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=9, BW=9641KiB/s (9873kB/s)(100MiB/10621msec) 00:33:46.934 slat (usec): min=634, max=2112.6k, avg=99998.74, stdev=407673.79 00:33:46.934 clat (msec): min=620, max=10618, avg=2661.78, stdev=3183.12 00:33:46.934 lat (msec): min=622, max=10620, avg=2761.77, stdev=3274.13 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 617], 5.00th=[ 625], 10.00th=[ 659], 20.00th=[ 701], 00:33:46.934 | 30.00th=[ 902], 40.00th=[ 1234], 50.00th=[ 1452], 60.00th=[ 1687], 00:33:46.934 | 70.00th=[ 1921], 80.00th=[ 2072], 90.00th=[10537], 95.00th=[10537], 00:33:46.934 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:33:46.934 | 99.99th=[10671] 00:33:46.934 lat (msec) : 750=20.00%, 1000=13.00%, 2000=41.00%, 
>=2000=26.00% 00:33:46.934 cpu : usr=0.00%, sys=0.49%, ctx=287, majf=0, minf=25601 00:33:46.934 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:33:46.934 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job5: (groupid=0, jobs=1): err= 0: pid=2296659: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=47, BW=47.0MiB/s (49.3MB/s)(496MiB/10544msec) 00:33:46.934 slat (usec): min=24, max=1818.3k, avg=20161.97, stdev=89314.58 00:33:46.934 clat (msec): min=540, max=4619, avg=2510.09, stdev=1169.42 00:33:46.934 lat (msec): min=555, max=4644, avg=2530.25, stdev=1172.54 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 558], 5.00th=[ 592], 10.00th=[ 676], 20.00th=[ 1418], 00:33:46.934 | 30.00th=[ 1787], 40.00th=[ 1972], 50.00th=[ 2601], 60.00th=[ 2937], 00:33:46.934 | 70.00th=[ 3205], 80.00th=[ 3842], 90.00th=[ 4044], 95.00th=[ 4245], 00:33:46.934 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:33:46.934 | 99.99th=[ 4597] 00:33:46.934 bw ( KiB/s): min= 2048, max=151552, per=1.20%, avg=44791.81, stdev=41445.41, samples=16 00:33:46.934 iops : min= 2, max= 148, avg=43.69, stdev=40.41, samples=16 00:33:46.934 lat (msec) : 750=10.69%, 1000=1.41%, 2000=31.05%, >=2000=56.85% 00:33:46.934 cpu : usr=0.03%, sys=1.14%, ctx=1378, majf=0, minf=32769 00:33:46.934 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:33:46.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.934 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:33:46.934 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.934 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.934 job5: (groupid=0, jobs=1): err= 0: pid=2296660: Tue Jun 11 14:01:38 2024 00:33:46.934 read: IOPS=90, BW=90.5MiB/s (94.9MB/s)(907MiB/10024msec) 00:33:46.934 slat (usec): min=25, max=2104.2k, avg=11022.64, stdev=116135.36 00:33:46.934 clat (msec): min=22, max=6916, avg=854.76, stdev=1492.25 00:33:46.934 lat (msec): min=26, max=6928, avg=865.78, stdev=1505.85 00:33:46.934 clat percentiles (msec): 00:33:46.934 | 1.00th=[ 58], 5.00th=[ 251], 10.00th=[ 292], 20.00th=[ 376], 00:33:46.934 | 30.00th=[ 439], 40.00th=[ 472], 50.00th=[ 510], 60.00th=[ 535], 00:33:46.934 | 70.00th=[ 550], 80.00th=[ 625], 90.00th=[ 651], 95.00th=[ 6745], 00:33:46.934 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:33:46.934 | 99.99th=[ 6946] 00:33:46.934 bw ( KiB/s): min=49152, max=327025, per=6.11%, avg=228584.17, stdev=101851.73, samples=6 00:33:46.934 iops : min= 48, max= 319, avg=223.17, stdev=99.40, samples=6 00:33:46.934 lat (msec) : 50=0.88%, 100=0.88%, 250=3.20%, 500=42.56%, 750=45.87% 00:33:46.934 lat (msec) : >=2000=6.62% 00:33:46.935 cpu : usr=0.02%, sys=1.75%, ctx=1752, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.935 issued rwts: total=907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.935 job5: (groupid=0, jobs=1): err= 
0: pid=2296661: Tue Jun 11 14:01:38 2024 00:33:46.935 read: IOPS=85, BW=85.7MiB/s (89.8MB/s)(911MiB/10634msec) 00:33:46.935 slat (usec): min=33, max=1949.0k, avg=11625.67, stdev=81619.59 00:33:46.935 clat (msec): min=36, max=6433, avg=1419.55, stdev=1630.07 00:33:46.935 lat (msec): min=332, max=7137, avg=1431.18, stdev=1638.98 00:33:46.935 clat percentiles (msec): 00:33:46.935 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 368], 20.00th=[ 422], 00:33:46.935 | 30.00th=[ 443], 40.00th=[ 464], 50.00th=[ 609], 60.00th=[ 735], 00:33:46.935 | 70.00th=[ 785], 80.00th=[ 2702], 90.00th=[ 4212], 95.00th=[ 5470], 00:33:46.935 | 99.00th=[ 6074], 99.50th=[ 6208], 99.90th=[ 6409], 99.95th=[ 6409], 00:33:46.935 | 99.99th=[ 6409] 00:33:46.935 bw ( KiB/s): min= 6144, max=303104, per=3.06%, avg=114532.43, stdev=105050.76, samples=14 00:33:46.935 iops : min= 6, max= 296, avg=111.79, stdev=102.53, samples=14 00:33:46.935 lat (msec) : 50=0.11%, 500=45.12%, 750=19.65%, 1000=6.48%, 2000=4.83% 00:33:46.935 lat (msec) : >=2000=23.82% 00:33:46.935 cpu : usr=0.07%, sys=1.54%, ctx=1205, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.935 issued rwts: total=911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.935 job5: (groupid=0, jobs=1): err= 0: pid=2296662: Tue Jun 11 14:01:38 2024 00:33:46.935 read: IOPS=80, BW=80.9MiB/s (84.8MB/s)(811MiB/10025msec) 00:33:46.935 slat (usec): min=29, max=2101.3k, avg=12329.91, stdev=124954.30 00:33:46.935 clat (msec): min=21, max=6857, avg=1023.42, stdev=1624.71 00:33:46.935 lat (msec): min=24, max=6862, avg=1035.75, stdev=1637.41 00:33:46.935 clat percentiles (msec): 00:33:46.935 | 1.00th=[ 37], 5.00th=[ 157], 10.00th=[ 342], 20.00th=[ 405], 00:33:46.935 | 30.00th=[ 472], 40.00th=[ 550], 50.00th=[ 592], 60.00th=[ 625], 00:33:46.935 | 70.00th=[ 684], 80.00th=[ 760], 90.00th=[ 869], 95.00th=[ 6812], 00:33:46.935 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:33:46.935 | 99.99th=[ 6879] 00:33:46.935 bw ( KiB/s): min=90112, max=342016, per=5.19%, avg=194218.67, stdev=85039.28, samples=6 00:33:46.935 iops : min= 88, max= 334, avg=189.67, stdev=83.05, samples=6 00:33:46.935 lat (msec) : 50=1.73%, 100=1.73%, 250=3.70%, 500=25.03%, 750=46.61% 00:33:46.935 lat (msec) : 1000=12.70%, >=2000=8.51% 00:33:46.935 cpu : usr=0.05%, sys=1.82%, ctx=1755, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.2% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.935 issued rwts: total=811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.935 job5: (groupid=0, jobs=1): err= 0: pid=2296664: Tue Jun 11 14:01:38 2024 00:33:46.935 read: IOPS=14, BW=14.6MiB/s (15.3MB/s)(159MiB/10923msec) 00:33:46.935 slat (usec): min=257, max=2121.1k, avg=68179.09, stdev=326708.59 00:33:46.935 clat (msec): min=81, max=10802, avg=8009.34, stdev=2816.19 00:33:46.935 lat (msec): min=2137, max=10806, avg=8077.52, stdev=2752.07 00:33:46.935 clat percentiles (msec): 00:33:46.935 | 1.00th=[ 2140], 5.00th=[ 3004], 10.00th=[ 3104], 20.00th=[ 4396], 00:33:46.935 | 
30.00th=[ 7819], 40.00th=[ 8154], 50.00th=[ 8423], 60.00th=[ 8658], 00:33:46.935 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10671], 95.00th=[10805], 00:33:46.935 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:33:46.935 | 99.99th=[10805] 00:33:46.935 bw ( KiB/s): min= 2048, max=40960, per=0.34%, avg=12697.60, stdev=16281.28, samples=5 00:33:46.935 iops : min= 2, max= 40, avg=12.40, stdev=15.90, samples=5 00:33:46.935 lat (msec) : 100=0.63%, >=2000=99.37% 00:33:46.935 cpu : usr=0.00%, sys=1.20%, ctx=292, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.0%, 16=10.1%, 32=20.1%, >=64=60.4% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=97.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.0% 00:33:46.935 issued rwts: total=159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.935 job5: (groupid=0, jobs=1): err= 0: pid=2296665: Tue Jun 11 14:01:38 2024 00:33:46.935 read: IOPS=25, BW=25.9MiB/s (27.2MB/s)(262MiB/10098msec) 00:33:46.935 slat (usec): min=80, max=1970.5k, avg=38197.72, stdev=153201.25 00:33:46.935 clat (msec): min=88, max=7032, avg=3422.48, stdev=1777.50 00:33:46.935 lat (msec): min=119, max=7182, avg=3460.67, stdev=1790.18 00:33:46.935 clat percentiles (msec): 00:33:46.935 | 1.00th=[ 230], 5.00th=[ 468], 10.00th=[ 1167], 20.00th=[ 1888], 00:33:46.935 | 30.00th=[ 2467], 40.00th=[ 3104], 50.00th=[ 3306], 60.00th=[ 3406], 00:33:46.935 | 70.00th=[ 3608], 80.00th=[ 5000], 90.00th=[ 6409], 95.00th=[ 6678], 00:33:46.935 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:33:46.935 | 99.99th=[ 7013] 00:33:46.935 bw ( KiB/s): min=14336, max=55296, per=0.93%, avg=34816.00, stdev=14090.08, samples=7 00:33:46.935 iops : min= 14, max= 54, avg=34.00, stdev=13.76, samples=7 00:33:46.935 lat (msec) : 100=0.38%, 250=1.15%, 500=3.82%, 750=1.91%, 1000=1.15% 00:33:46.935 lat (msec) : 2000=14.12%, >=2000=77.48% 00:33:46.935 cpu : usr=0.03%, sys=1.31%, ctx=837, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.2%, >=64=76.0% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:33:46.935 issued rwts: total=262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.935 job5: (groupid=0, jobs=1): err= 0: pid=2296666: Tue Jun 11 14:01:38 2024 00:33:46.935 read: IOPS=28, BW=28.6MiB/s (30.0MB/s)(304MiB/10615msec) 00:33:46.935 slat (usec): min=28, max=2107.8k, avg=34645.73, stdev=229527.54 00:33:46.935 clat (msec): min=80, max=7413, avg=1644.90, stdev=1141.34 00:33:46.935 lat (msec): min=702, max=7419, avg=1679.54, stdev=1202.29 00:33:46.935 clat percentiles (msec): 00:33:46.935 | 1.00th=[ 701], 5.00th=[ 743], 10.00th=[ 751], 20.00th=[ 785], 00:33:46.935 | 30.00th=[ 810], 40.00th=[ 827], 50.00th=[ 835], 60.00th=[ 2232], 00:33:46.935 | 70.00th=[ 2400], 80.00th=[ 2567], 90.00th=[ 2769], 95.00th=[ 2836], 00:33:46.935 | 99.00th=[ 6409], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:33:46.935 | 99.99th=[ 7416] 00:33:46.935 bw ( KiB/s): min=14336, max=161792, per=2.41%, avg=90112.00, stdev=79477.18, samples=4 00:33:46.935 iops : min= 14, max= 158, avg=88.00, stdev=77.61, samples=4 00:33:46.935 lat (msec) : 100=0.33%, 750=7.24%, 1000=47.37%, 2000=1.64%, >=2000=43.42% 00:33:46.935 cpu : 
usr=0.02%, sys=0.75%, ctx=346, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.5%, >=64=79.3% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:33:46.935 issued rwts: total=304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.935 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.935 job5: (groupid=0, jobs=1): err= 0: pid=2296667: Tue Jun 11 14:01:38 2024 00:33:46.935 read: IOPS=130, BW=130MiB/s (136MB/s)(1304MiB/10023msec) 00:33:46.935 slat (usec): min=27, max=101047, avg=7664.66, stdev=11833.34 00:33:46.935 clat (msec): min=22, max=2201, avg=925.81, stdev=434.33 00:33:46.935 lat (msec): min=24, max=2210, avg=933.47, stdev=436.64 00:33:46.935 clat percentiles (msec): 00:33:46.935 | 1.00th=[ 67], 5.00th=[ 296], 10.00th=[ 584], 20.00th=[ 709], 00:33:46.935 | 30.00th=[ 768], 40.00th=[ 802], 50.00th=[ 827], 60.00th=[ 835], 00:33:46.935 | 70.00th=[ 877], 80.00th=[ 1045], 90.00th=[ 1636], 95.00th=[ 1972], 00:33:46.935 | 99.00th=[ 2140], 99.50th=[ 2165], 99.90th=[ 2198], 99.95th=[ 2198], 00:33:46.935 | 99.99th=[ 2198] 00:33:46.935 bw ( KiB/s): min=43008, max=212992, per=3.45%, avg=129084.71, stdev=54495.71, samples=17 00:33:46.935 iops : min= 42, max= 208, avg=125.94, stdev=53.13, samples=17 00:33:46.935 lat (msec) : 50=0.77%, 100=0.84%, 250=2.61%, 500=3.60%, 750=19.10% 00:33:46.935 lat (msec) : 1000=50.84%, 2000=17.79%, >=2000=4.45% 00:33:46.935 cpu : usr=0.03%, sys=2.43%, ctx=2108, majf=0, minf=32769 00:33:46.935 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:33:46.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.935 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.936 issued rwts: total=1304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.936 job5: (groupid=0, jobs=1): err= 0: pid=2296668: Tue Jun 11 14:01:38 2024 00:33:46.936 read: IOPS=160, BW=161MiB/s (169MB/s)(1610MiB/10011msec) 00:33:46.936 slat (usec): min=23, max=2099.5k, avg=6207.78, stdev=102726.58 00:33:46.936 clat (msec): min=10, max=8808, avg=557.43, stdev=1833.73 00:33:46.936 lat (msec): min=11, max=8809, avg=563.63, stdev=1845.19 00:33:46.936 clat percentiles (msec): 00:33:46.936 | 1.00th=[ 24], 5.00th=[ 77], 10.00th=[ 102], 20.00th=[ 102], 00:33:46.936 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 103], 60.00th=[ 103], 00:33:46.936 | 70.00th=[ 103], 80.00th=[ 103], 90.00th=[ 104], 95.00th=[ 4396], 00:33:46.936 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:33:46.936 | 99.99th=[ 8792] 00:33:46.936 bw ( KiB/s): min=520192, max=1275904, per=24.01%, avg=898048.00, stdev=534369.08, samples=2 00:33:46.936 iops : min= 508, max= 1246, avg=877.00, stdev=521.84, samples=2 00:33:46.936 lat (msec) : 20=0.75%, 50=2.24%, 100=3.98%, 250=86.46%, >=2000=6.58% 00:33:46.936 cpu : usr=0.06%, sys=2.34%, ctx=1558, majf=0, minf=32769 00:33:46.936 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:33:46.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:46.936 issued rwts: total=1610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:46.936 00:33:46.936 Run status group 0 (all jobs): 
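The summary that follows aggregates the whole run: the READ line combines the bandwidth of every job in the group (here 3653MiB/s across the six nvmeXn1 namespaces), and the Disk stats lines report per-namespace I/O counts and device utilization as seen from the initiator. To pull those figures back out of a captured console log such as this one, a short grep helper is enough (hypothetical; not part of the SPDK test scripts, it only scrapes the text of the log):

    #!/usr/bin/env bash
    # Hypothetical helper: scrape fio's end-of-run summary out of a console log.
    log="${1:?usage: $0 <console-log>}"
    # Aggregate read bandwidth from the "Run status group" line.
    grep -Eo 'READ: bw=[^,]+' "$log"
    # Per-namespace ios/util figures from the "Disk stats (read/write)" block.
    grep -Eo 'nvme[0-9]+n[0-9]+: ios=[0-9/]+.*util=[0-9.]+%' "$log"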
00:33:46.936 READ: bw=3653MiB/s (3830MB/s), 882KiB/s-429MiB/s (903kB/s-450MB/s), io=46.4GiB (49.8GB), run=10009-13005msec 00:33:46.936 00:33:46.936 Disk stats (read/write): 00:33:46.936 nvme0n1: ios=18384/0, merge=0/0, ticks=6509742/0, in_queue=6509742, util=98.51% 00:33:46.936 nvme1n1: ios=34448/0, merge=0/0, ticks=7200171/0, in_queue=7200171, util=98.71% 00:33:46.936 nvme2n1: ios=73677/0, merge=0/0, ticks=8557472/0, in_queue=8557472, util=98.86% 00:33:46.936 nvme3n1: ios=61806/0, merge=0/0, ticks=6832127/0, in_queue=6832127, util=98.73% 00:33:46.936 nvme4n1: ios=128502/0, merge=0/0, ticks=7547261/0, in_queue=7547261, util=98.81% 00:33:46.936 nvme5n1: ios=62049/0, merge=0/0, ticks=6635490/0, in_queue=6635490, util=99.23% 00:33:46.936 14:01:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:33:46.936 14:01:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:33:46.936 14:01:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:33:46.936 14:01:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:33:46.936 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000000 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000000 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:33:46.936 14:01:39 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:48.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000001 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000001 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1230 -- # return 0 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:33:48.321 14:01:41 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:33:49.705 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000002 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000002 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:33:49.705 14:01:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:33:51.089 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000003 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000003 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:33:51.089 14:01:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:33:52.471 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000004 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000004 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:33:52.471 14:01:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:33:53.854 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000005 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000005 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:33:53.854 14:01:46 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:33:53.854 rmmod nvme_rdma 00:33:53.854 rmmod nvme_fabrics 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2294289 ']' 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2294289 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@949 -- # '[' -z 2294289 ']' 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # kill -0 2294289 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # uname 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2294289 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2294289' 00:33:53.854 killing process with pid 2294289 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # kill 2294289 00:33:53.854 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # wait 2294289 00:33:54.114 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:54.114 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:33:54.114 00:33:54.114 real 0m40.179s 00:33:54.114 user 2m27.220s 00:33:54.114 sys 0m17.499s 00:33:54.114 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:54.114 14:01:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:33:54.114 ************************************ 00:33:54.114 END TEST nvmf_srq_overwhelm 00:33:54.114 ************************************ 00:33:54.114 14:01:46 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:33:54.114 14:01:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:54.114 14:01:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:54.114 14:01:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:33:54.114 ************************************ 00:33:54.114 START TEST nvmf_shutdown 00:33:54.114 ************************************ 00:33:54.114 14:01:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:33:54.375 * Looking for test storage... 
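At this point the srq_overwhelm run is fully torn down: the trace above walks each of the six subsystems through an initiator-side disconnect, waits for its serial to drop out of lsblk, deletes the subsystem over RPC, then stops the target (nvmftestfini, module unload, killprocess). Reconstructed from that trace (srq_overwhelm.sh markers @38 through @48) rather than copied from the script, the teardown loop looks roughly like this; rpc_cmd, waitforserial_disconnect and nvmftestfini are the SPDK autotest helpers the trace shows being called:

    # Sketch of the teardown sequence traced above; assumes the SPDK
    # autotest helpers (rpc_cmd, waitforserial_disconnect, nvmftestfini)
    # are already sourced, as they are in the test script.
    sync
    for i in $(seq 0 5); do
        # Drop the kernel initiator's connection to subsystem i.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Wait until the namespace serial disappears from lsblk output.
        waitforserial_disconnect "$(printf 'SPDK%014d' "$i")"
        # Remove the subsystem from the running SPDK target over RPC.
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
    trap - SIGINT SIGTERM EXIT
    nvmftestfini

The nvmf_shutdown suite that begins here locates its test storage, sources nvmf/common.sh, and its first case, nvmf_shutdown_tc1, starts by re-running the same RDMA NIC discovery and IP assignment (nvmftestinit) on the two mlx5 ports.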
00:33:54.375 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:54.375 ************************************ 00:33:54.375 START TEST nvmf_shutdown_tc1 00:33:54.375 ************************************ 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:33:54.375 14:01:47 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:33:54.375 14:01:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:34:02.541 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:34:02.541 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:34:02.541 Found net devices under 0000:98:00.0: mlx_0_0 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:34:02.541 Found net devices under 0000:98:00.1: mlx_0_1 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:02.541 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:02.542 14:01:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:02.542 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:02.542 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:34:02.542 altname enp152s0f0np0 00:34:02.542 altname ens817f0np0 00:34:02.542 inet 192.168.100.8/24 scope global mlx_0_0 00:34:02.542 valid_lft forever preferred_lft forever 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:02.542 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:02.542 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:34:02.542 altname enp152s0f1np1 00:34:02.542 altname ens817f1np1 00:34:02.542 inet 192.168.100.9/24 scope global mlx_0_1 00:34:02.542 valid_lft forever preferred_lft forever 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:02.542 14:01:54 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:02.542 192.168.100.9' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:02.542 192.168.100.9' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:02.542 192.168.100.9' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:34:02.542 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2304339 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2304339 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2304339 ']' 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:02.543 14:01:54 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.543 [2024-06-11 14:01:54.528597] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:02.543 [2024-06-11 14:01:54.528667] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.543 EAL: No free 2048 kB hugepages reported on node 1 00:34:02.543 [2024-06-11 14:01:54.612570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:02.543 [2024-06-11 14:01:54.708128] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.543 [2024-06-11 14:01:54.708189] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.543 [2024-06-11 14:01:54.708198] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.543 [2024-06-11 14:01:54.708206] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
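The stretch of trace above shows how the harness derives the two RDMA target addresses before it starts the target: get_rdma_if_list yields mlx_0_0 and mlx_0_1, get_ip_address reads each interface's IPv4 address with ip/awk/cut, and nvmf/common.sh then keeps the first address as NVMF_FIRST_TARGET_IP (192.168.100.8) and the second as NVMF_SECOND_TARGET_IP (192.168.100.9). The bash sketch below reconstructs that logic from the traced commands only; it is not the verbatim nvmf/common.sh source, and the interface names and addresses are simply the ones reported in this run.

    # Sketch reconstructed from the traced commands, not copied from nvmf/common.sh.
    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show <if>" prints one record per address; field 4 is the CIDR
        # (e.g. 192.168.100.8/24), so awk + cut reduce it to the bare address.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # mlx_0_0 and mlx_0_1 are the RDMA-capable netdevs reported in this run.
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 here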
00:34:02.543 [2024-06-11 14:01:54.708213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.543 [2024-06-11 14:01:54.708345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.543 [2024-06-11 14:01:54.708515] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:02.543 [2024-06-11 14:01:54.708678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.543 [2024-06-11 14:01:54.708680] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.543 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.543 [2024-06-11 14:01:55.399496] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6610d0/0x6655c0) succeed. 00:34:02.543 [2024-06-11 14:01:55.412498] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x662710/0x6a6c50) succeed. 
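Between the address discovery and the IB device notices above, the trace covers the target bring-up for this test case: nvmfappstart launches nvmf_tgt with -i 0 -e 0xFFFF -m 0x1E and records its pid (2304339 here), waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers, and target/shutdown.sh@20 then issues nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192, which is immediately followed by the two create_ib_device notices. Below is a condensed bash sketch of that sequence based only on the commands visible in the trace; the polling loop stands in for the real waitforlisten helper, whose implementation is not shown here.

    # Condensed sketch of the traced bring-up; binary path and arguments are the ones logged above.
    nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc_sock=/var/tmp/spdk.sock

    "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &     # -m 0x1E: reactors on cores 1-4, as the notices report
    nvmfpid=$!

    # Stand-in for waitforlisten: poll until the RPC socket accepts a harmless request.
    until scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # Traced at target/shutdown.sh@20: create the RDMA transport the subsystems will listen on.
    scripts/rpc.py -s "$rpc_sock" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192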
00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.887 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:02.888 14:01:55 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:02.888 Malloc1 00:34:02.888 [2024-06-11 14:01:55.639570] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:02.888 Malloc2 00:34:02.888 Malloc3 00:34:02.888 Malloc4 
00:34:02.888 Malloc5 00:34:03.147 Malloc6 00:34:03.147 Malloc7 00:34:03.147 Malloc8 00:34:03.147 Malloc9 00:34:03.147 Malloc10 00:34:03.147 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:03.147 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:34:03.147 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:03.147 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:03.147 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2304727 00:34:03.147 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2304727 /var/tmp/bdevperf.sock 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2304727 ']' 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:03.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.408 { 00:34:03.408 "params": { 00:34:03.408 "name": "Nvme$subsystem", 00:34:03.408 "trtype": "$TEST_TRANSPORT", 00:34:03.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.408 "adrfam": "ipv4", 00:34:03.408 "trsvcid": "$NVMF_PORT", 00:34:03.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.408 "hdgst": ${hdgst:-false}, 00:34:03.408 "ddgst": ${ddgst:-false} 00:34:03.408 }, 00:34:03.408 "method": "bdev_nvme_attach_controller" 00:34:03.408 } 00:34:03.408 EOF 00:34:03.408 )") 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.408 { 00:34:03.408 "params": { 00:34:03.408 "name": "Nvme$subsystem", 00:34:03.408 "trtype": 
"$TEST_TRANSPORT", 00:34:03.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.408 "adrfam": "ipv4", 00:34:03.408 "trsvcid": "$NVMF_PORT", 00:34:03.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.408 "hdgst": ${hdgst:-false}, 00:34:03.408 "ddgst": ${ddgst:-false} 00:34:03.408 }, 00:34:03.408 "method": "bdev_nvme_attach_controller" 00:34:03.408 } 00:34:03.408 EOF 00:34:03.408 )") 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.408 { 00:34:03.408 "params": { 00:34:03.408 "name": "Nvme$subsystem", 00:34:03.408 "trtype": "$TEST_TRANSPORT", 00:34:03.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.408 "adrfam": "ipv4", 00:34:03.408 "trsvcid": "$NVMF_PORT", 00:34:03.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.408 "hdgst": ${hdgst:-false}, 00:34:03.408 "ddgst": ${ddgst:-false} 00:34:03.408 }, 00:34:03.408 "method": "bdev_nvme_attach_controller" 00:34:03.408 } 00:34:03.408 EOF 00:34:03.408 )") 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.408 { 00:34:03.408 "params": { 00:34:03.408 "name": "Nvme$subsystem", 00:34:03.408 "trtype": "$TEST_TRANSPORT", 00:34:03.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.408 "adrfam": "ipv4", 00:34:03.408 "trsvcid": "$NVMF_PORT", 00:34:03.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.408 "hdgst": ${hdgst:-false}, 00:34:03.408 "ddgst": ${ddgst:-false} 00:34:03.408 }, 00:34:03.408 "method": "bdev_nvme_attach_controller" 00:34:03.408 } 00:34:03.408 EOF 00:34:03.408 )") 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.408 { 00:34:03.408 "params": { 00:34:03.408 "name": "Nvme$subsystem", 00:34:03.408 "trtype": "$TEST_TRANSPORT", 00:34:03.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.408 "adrfam": "ipv4", 00:34:03.408 "trsvcid": "$NVMF_PORT", 00:34:03.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.408 "hdgst": ${hdgst:-false}, 00:34:03.408 "ddgst": ${ddgst:-false} 00:34:03.408 }, 00:34:03.408 "method": "bdev_nvme_attach_controller" 00:34:03.408 } 00:34:03.408 EOF 00:34:03.408 )") 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.408 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.408 { 00:34:03.408 "params": { 00:34:03.408 "name": "Nvme$subsystem", 00:34:03.408 "trtype": "$TEST_TRANSPORT", 
00:34:03.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.408 "adrfam": "ipv4", 00:34:03.408 "trsvcid": "$NVMF_PORT", 00:34:03.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.409 "hdgst": ${hdgst:-false}, 00:34:03.409 "ddgst": ${ddgst:-false} 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 } 00:34:03.409 EOF 00:34:03.409 )") 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.409 [2024-06-11 14:01:56.103437] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:03.409 [2024-06-11 14:01:56.103494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.409 { 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme$subsystem", 00:34:03.409 "trtype": "$TEST_TRANSPORT", 00:34:03.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "$NVMF_PORT", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.409 "hdgst": ${hdgst:-false}, 00:34:03.409 "ddgst": ${ddgst:-false} 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 } 00:34:03.409 EOF 00:34:03.409 )") 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.409 { 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme$subsystem", 00:34:03.409 "trtype": "$TEST_TRANSPORT", 00:34:03.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "$NVMF_PORT", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.409 "hdgst": ${hdgst:-false}, 00:34:03.409 "ddgst": ${ddgst:-false} 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 } 00:34:03.409 EOF 00:34:03.409 )") 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.409 { 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme$subsystem", 00:34:03.409 "trtype": "$TEST_TRANSPORT", 00:34:03.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "$NVMF_PORT", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.409 "hdgst": ${hdgst:-false}, 00:34:03.409 "ddgst": ${ddgst:-false} 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 } 00:34:03.409 EOF 00:34:03.409 )") 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
cat 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:03.409 { 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme$subsystem", 00:34:03.409 "trtype": "$TEST_TRANSPORT", 00:34:03.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "$NVMF_PORT", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:03.409 "hdgst": ${hdgst:-false}, 00:34:03.409 "ddgst": ${ddgst:-false} 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 } 00:34:03.409 EOF 00:34:03.409 )") 00:34:03.409 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:34:03.409 14:01:56 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme1", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme2", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme3", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme4", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme5", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme6", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": 
"192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme7", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme8", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme9", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 },{ 00:34:03.409 "params": { 00:34:03.409 "name": "Nvme10", 00:34:03.409 "trtype": "rdma", 00:34:03.409 "traddr": "192.168.100.8", 00:34:03.409 "adrfam": "ipv4", 00:34:03.409 "trsvcid": "4420", 00:34:03.409 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:03.409 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:03.409 "hdgst": false, 00:34:03.409 "ddgst": false 00:34:03.409 }, 00:34:03.409 "method": "bdev_nvme_attach_controller" 00:34:03.409 }' 00:34:03.409 [2024-06-11 14:01:56.164873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.409 [2024-06-11 14:01:56.230289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2304727 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:34:04.351 14:01:57 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:34:05.292 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2304727 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json 
<(gen_nvmf_target_json "${num_subsystems[@]}") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2304339 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.292 "trtype": "$TEST_TRANSPORT", 00:34:05.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.292 "adrfam": "ipv4", 00:34:05.292 "trsvcid": "$NVMF_PORT", 00:34:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.292 "hdgst": ${hdgst:-false}, 00:34:05.292 "ddgst": ${ddgst:-false} 00:34:05.292 }, 00:34:05.292 "method": "bdev_nvme_attach_controller" 00:34:05.292 } 00:34:05.292 EOF 00:34:05.292 )") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.292 "trtype": "$TEST_TRANSPORT", 00:34:05.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.292 "adrfam": "ipv4", 00:34:05.292 "trsvcid": "$NVMF_PORT", 00:34:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.292 "hdgst": ${hdgst:-false}, 00:34:05.292 "ddgst": ${ddgst:-false} 00:34:05.292 }, 00:34:05.292 "method": "bdev_nvme_attach_controller" 00:34:05.292 } 00:34:05.292 EOF 00:34:05.292 )") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.292 "trtype": "$TEST_TRANSPORT", 00:34:05.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.292 "adrfam": "ipv4", 00:34:05.292 "trsvcid": "$NVMF_PORT", 00:34:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.292 "hdgst": ${hdgst:-false}, 00:34:05.292 "ddgst": ${ddgst:-false} 00:34:05.292 }, 00:34:05.292 "method": "bdev_nvme_attach_controller" 00:34:05.292 } 00:34:05.292 EOF 00:34:05.292 )") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.292 "trtype": "$TEST_TRANSPORT", 00:34:05.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.292 "adrfam": "ipv4", 00:34:05.292 "trsvcid": "$NVMF_PORT", 00:34:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.292 "hdgst": ${hdgst:-false}, 00:34:05.292 "ddgst": ${ddgst:-false} 00:34:05.292 }, 00:34:05.292 "method": "bdev_nvme_attach_controller" 00:34:05.292 } 00:34:05.292 EOF 00:34:05.292 )") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.292 "trtype": "$TEST_TRANSPORT", 00:34:05.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.292 "adrfam": "ipv4", 00:34:05.292 "trsvcid": "$NVMF_PORT", 00:34:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.292 "hdgst": ${hdgst:-false}, 00:34:05.292 "ddgst": ${ddgst:-false} 00:34:05.292 }, 00:34:05.292 "method": "bdev_nvme_attach_controller" 00:34:05.292 } 00:34:05.292 EOF 00:34:05.292 )") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.292 "trtype": "$TEST_TRANSPORT", 00:34:05.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.292 "adrfam": "ipv4", 00:34:05.292 "trsvcid": "$NVMF_PORT", 00:34:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.292 "hdgst": ${hdgst:-false}, 00:34:05.292 "ddgst": ${ddgst:-false} 00:34:05.292 }, 00:34:05.292 "method": "bdev_nvme_attach_controller" 00:34:05.292 } 00:34:05.292 EOF 00:34:05.292 )") 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.292 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.292 { 00:34:05.292 "params": { 00:34:05.292 "name": "Nvme$subsystem", 00:34:05.293 "trtype": "$TEST_TRANSPORT", 00:34:05.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.293 "adrfam": "ipv4", 00:34:05.293 "trsvcid": "$NVMF_PORT", 00:34:05.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.293 "hdgst": ${hdgst:-false}, 00:34:05.293 "ddgst": ${ddgst:-false} 00:34:05.293 }, 00:34:05.293 "method": "bdev_nvme_attach_controller" 00:34:05.293 } 00:34:05.293 EOF 00:34:05.293 )") 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.293 [2024-06-11 14:01:58.177185] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.293 [2024-06-11 14:01:58.177255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2305095 ] 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.293 { 00:34:05.293 "params": { 00:34:05.293 "name": "Nvme$subsystem", 00:34:05.293 "trtype": "$TEST_TRANSPORT", 00:34:05.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.293 "adrfam": "ipv4", 00:34:05.293 "trsvcid": "$NVMF_PORT", 00:34:05.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.293 "hdgst": ${hdgst:-false}, 00:34:05.293 "ddgst": ${ddgst:-false} 00:34:05.293 }, 00:34:05.293 "method": "bdev_nvme_attach_controller" 00:34:05.293 } 00:34:05.293 EOF 00:34:05.293 )") 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.293 { 00:34:05.293 "params": { 00:34:05.293 "name": "Nvme$subsystem", 00:34:05.293 "trtype": "$TEST_TRANSPORT", 00:34:05.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.293 "adrfam": "ipv4", 00:34:05.293 "trsvcid": "$NVMF_PORT", 00:34:05.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.293 "hdgst": ${hdgst:-false}, 00:34:05.293 "ddgst": ${ddgst:-false} 00:34:05.293 }, 00:34:05.293 "method": "bdev_nvme_attach_controller" 00:34:05.293 } 00:34:05.293 EOF 00:34:05.293 )") 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:05.293 { 00:34:05.293 "params": { 00:34:05.293 "name": "Nvme$subsystem", 00:34:05.293 "trtype": "$TEST_TRANSPORT", 00:34:05.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:05.293 "adrfam": "ipv4", 00:34:05.293 "trsvcid": "$NVMF_PORT", 00:34:05.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:05.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:05.293 "hdgst": ${hdgst:-false}, 00:34:05.293 "ddgst": ${ddgst:-false} 00:34:05.293 }, 00:34:05.293 "method": "bdev_nvme_attach_controller" 00:34:05.293 } 00:34:05.293 EOF 00:34:05.293 )") 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:34:05.293 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
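The repeated config+=("$(cat <<-EOF ...)") records surrounding this point are gen_nvmf_target_json at work: shutdown.sh calls it with the subsystem numbers 1 through 10, and for each number it appends one bdev_nvme_attach_controller fragment (name Nvme$i, trtype/traddr/trsvcid taken from the test environment, hdgst/ddgst defaulting to false) to a bash array, joins the fragments with IFS=, and runs the result through jq before printing it; bdevperf reads the printed document as its --json config on /dev/fd/62. The sketch below reproduces that pattern from the trace alone; it is not the verbatim helper, and the [...] wrapper exists only so jq sees a complete document, since the full wrapper used by the real helper is not echoed in the log.

    # Reduced sketch of the traced gen_nvmf_target_json pattern (not the verbatim helper).
    gen_target_json_sketch() {
        local subsystem fragment
        local config=()
        for subsystem in "${@:-1}"; do
            # One bdev_nvme_attach_controller entry per subsystem, with exactly the
            # fields that appear in the printed config above.
            printf -v fragment '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
                "$subsystem" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" \
                "$subsystem" "$subsystem"
            config+=("$fragment")
        done
        # Join the fragments with commas, as the traced IFS=, / printf step does. Wrapping
        # them in [...] is only so jq can validate a complete document; the real helper embeds
        # the fragments in a larger target config that the trace does not echo.
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .
    }

    # In this run the helper is invoked as: gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10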
00:34:05.554 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:34:05.554 14:01:58 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme1", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme2", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme3", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme4", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme5", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme6", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme7", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme8", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:05.554 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme9", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 },{ 00:34:05.554 "params": { 00:34:05.554 "name": "Nvme10", 00:34:05.554 "trtype": "rdma", 00:34:05.554 "traddr": "192.168.100.8", 00:34:05.554 "adrfam": "ipv4", 00:34:05.554 "trsvcid": "4420", 00:34:05.554 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:05.554 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:05.554 "hdgst": false, 00:34:05.554 "ddgst": false 00:34:05.554 }, 00:34:05.554 "method": "bdev_nvme_attach_controller" 00:34:05.554 }' 00:34:05.554 EAL: No free 2048 kB hugepages reported on node 1 00:34:05.554 [2024-06-11 14:01:58.241564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.554 [2024-06-11 14:01:58.305850] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.494 Running I/O for 1 seconds... 00:34:07.875 00:34:07.875 Latency(us) 00:34:07.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.875 Verification LBA range: start 0x0 length 0x400 00:34:07.875 Nvme1n1 : 1.21 290.83 18.18 0.00 0.00 216442.88 9721.17 234181.97 00:34:07.875 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme2n1 : 1.21 287.18 17.95 0.00 0.00 215502.80 10048.85 222822.40 00:34:07.876 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme3n1 : 1.21 281.09 17.57 0.00 0.00 216395.40 10212.69 211462.83 00:34:07.876 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme4n1 : 1.21 316.13 19.76 0.00 0.00 189512.75 2894.51 174762.67 00:34:07.876 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme5n1 : 1.22 315.58 19.72 0.00 0.00 187480.75 11687.25 164276.91 00:34:07.876 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme6n1 : 1.22 315.03 19.69 0.00 0.00 184497.49 12834.13 145053.01 00:34:07.876 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme7n1 : 1.22 314.50 19.66 0.00 0.00 181342.08 12397.23 127576.75 00:34:07.876 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme8n1 : 1.22 313.96 19.62 0.00 0.00 178587.31 15073.28 119712.43 00:34:07.876 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme9n1 : 1.23 313.41 19.59 0.00 0.00 175609.17 16165.55 138936.32 00:34:07.876 Job: Nvme10n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:07.876 Verification LBA range: start 0x0 length 0x400 00:34:07.876 Nvme10n1 : 1.23 260.73 16.30 0.00 0.00 207030.87 10321.92 234181.97 00:34:07.876 =================================================================================================================== 00:34:07.876 Total : 3008.44 188.03 0.00 0.00 194401.89 2894.51 234181.97 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:07.876 rmmod nvme_rdma 00:34:07.876 rmmod nvme_fabrics 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2304339 ']' 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2304339 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 2304339 ']' 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 2304339 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2304339 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2304339' 00:34:07.876 killing process with pid 2304339 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@968 -- # kill 2304339 00:34:07.876 14:02:00 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 2304339 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:08.447 00:34:08.447 real 0m13.897s 00:34:08.447 user 0m30.447s 00:34:08.447 sys 0m6.273s 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:08.447 ************************************ 00:34:08.447 END TEST nvmf_shutdown_tc1 00:34:08.447 ************************************ 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:08.447 ************************************ 00:34:08.447 START TEST nvmf_shutdown_tc2 00:34:08.447 ************************************ 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:34:08.447 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:34:08.447 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:08.447 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:34:08.448 Found net devices under 0000:98:00.0: mlx_0_0 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:34:08.448 Found net devices under 0000:98:00.1: mlx_0_1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:08.448 14:02:01 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:08.448 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:08.448 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:34:08.448 altname enp152s0f0np0 00:34:08.448 altname ens817f0np0 00:34:08.448 inet 192.168.100.8/24 scope global mlx_0_0 00:34:08.448 valid_lft forever preferred_lft forever 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:08.448 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:08.448 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:34:08.448 altname enp152s0f1np1 00:34:08.448 
altname ens817f1np1 00:34:08.448 inet 192.168.100.9/24 scope global mlx_0_1 00:34:08.448 valid_lft forever preferred_lft forever 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:08.448 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:08.449 14:02:01 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:08.449 192.168.100.9' 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:08.449 192.168.100.9' 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:08.449 192.168.100.9' 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:34:08.449 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2305863 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2305863 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2305863 ']' 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:08.709 14:02:01 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:08.709 [2024-06-11 14:02:01.445051] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:08.709 [2024-06-11 14:02:01.445114] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.709 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.709 [2024-06-11 14:02:01.526875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:08.709 [2024-06-11 14:02:01.588058] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.709 [2024-06-11 14:02:01.588092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.709 [2024-06-11 14:02:01.588098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.709 [2024-06-11 14:02:01.588103] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.709 [2024-06-11 14:02:01.588108] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.709 [2024-06-11 14:02:01.588246] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.709 [2024-06-11 14:02:01.588464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.709 [2024-06-11 14:02:01.588621] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.709 [2024-06-11 14:02:01.588622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:09.651 [2024-06-11 14:02:02.297517] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21d10d0/0x21d55c0) succeed. 00:34:09.651 [2024-06-11 14:02:02.308258] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21d2710/0x2216c50) succeed. 
00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.651 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:09.651 Malloc1 00:34:09.651 [2024-06-11 14:02:02.502535] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:09.651 Malloc2 00:34:09.651 Malloc3 00:34:09.911 Malloc4 
00:34:09.911 Malloc5 00:34:09.911 Malloc6 00:34:09.911 Malloc7 00:34:09.911 Malloc8 00:34:09.911 Malloc9 00:34:10.171 Malloc10 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2306195 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2306195 /var/tmp/bdevperf.sock 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2306195 ']' 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:10.171 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:10.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 
00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 
"trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 [2024-06-11 14:02:02.962790] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:34:10.172 [2024-06-11 14:02:02.962859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306195 ] 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.172 { 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme$subsystem", 00:34:10.172 "trtype": "$TEST_TRANSPORT", 00:34:10.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.172 "adrfam": "ipv4", 00:34:10.172 "trsvcid": "$NVMF_PORT", 00:34:10.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.172 "hdgst": ${hdgst:-false}, 00:34:10.172 "ddgst": ${ddgst:-false} 00:34:10.172 }, 00:34:10.172 "method": "bdev_nvme_attach_controller" 00:34:10.172 } 00:34:10.172 EOF 00:34:10.172 )") 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:34:10.172 14:02:02 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:10.172 "params": { 00:34:10.172 "name": "Nvme1", 00:34:10.172 "trtype": "rdma", 00:34:10.172 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme2", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme3", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme4", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme5", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme6", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme7", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme8", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:34:10.173 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme9", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 },{ 00:34:10.173 "params": { 00:34:10.173 "name": "Nvme10", 00:34:10.173 "trtype": "rdma", 00:34:10.173 "traddr": "192.168.100.8", 00:34:10.173 "adrfam": "ipv4", 00:34:10.173 "trsvcid": "4420", 00:34:10.173 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:10.173 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:10.173 "hdgst": false, 00:34:10.173 "ddgst": false 00:34:10.173 }, 00:34:10.173 "method": "bdev_nvme_attach_controller" 00:34:10.173 }' 00:34:10.173 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.173 [2024-06-11 14:02:03.025351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.433 [2024-06-11 14:02:03.089667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.373 Running I/O for 10 seconds... 00:34:11.373 14:02:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:11.373 14:02:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:34:11.373 14:02:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:11.373 14:02:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.373 14:02:03 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:11.373 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.373 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:34:11.373 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:11.373 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:34:11.373 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.374 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:11.634 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.634 14:02:04 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:34:11.634 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:34:11.634 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=155 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:34:11.894 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2306195 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2306195 ']' 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2306195 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2306195 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2306195' 00:34:12.155 killing process with pid 2306195 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2306195 00:34:12.155 14:02:04 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2306195 00:34:12.155 Received shutdown signal, test time was about 1.028498 seconds 00:34:12.155 00:34:12.155 Latency(us) 00:34:12.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.155 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme1n1 : 1.01 276.74 17.30 0.00 0.00 227004.22 9393.49 239424.85 00:34:12.155 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme2n1 : 1.01 283.26 17.70 0.00 0.00 217848.11 9721.17 230686.72 00:34:12.155 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme3n1 : 1.01 290.75 18.17 0.00 0.00 208156.27 4150.61 216705.71 00:34:12.155 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme4n1 : 1.02 314.93 19.68 0.00 0.00 188215.30 6498.99 170393.60 00:34:12.155 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme5n1 : 1.02 314.37 19.65 0.00 0.00 185390.68 11414.19 162529.28 00:34:12.155 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme6n1 : 1.02 313.77 19.61 0.00 0.00 182150.83 12397.23 145926.83 00:34:12.155 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme7n1 : 1.02 313.22 19.58 0.00 0.00 178235.65 13216.43 129324.37 00:34:12.155 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme8n1 : 1.02 312.64 19.54 0.00 0.00 175010.82 14090.24 118838.61 00:34:12.155 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.155 Verification LBA range: start 0x0 length 0x400 00:34:12.155 Nvme9n1 : 1.03 312.05 19.50 0.00 0.00 171563.01 15073.28 136314.88 00:34:12.156 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:12.156 Verification LBA range: start 0x0 length 0x400 00:34:12.156 Nvme10n1 : 1.03 249.16 15.57 0.00 0.00 209949.23 10048.85 248162.99 00:34:12.156 =================================================================================================================== 00:34:12.156 Total : 2980.90 186.31 0.00 0.00 193225.11 4150.61 248162.99 00:34:12.417 14:02:05 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2305863 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:13.359 14:02:06 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:13.359 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:13.359 rmmod nvme_rdma 00:34:13.621 rmmod nvme_fabrics 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2305863 ']' 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2305863 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2305863 ']' 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2305863 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2305863 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2305863' 00:34:13.621 killing process with pid 2305863 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2305863 00:34:13.621 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2305863 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:13.882 00:34:13.882 real 0m5.514s 00:34:13.882 user 0m22.413s 00:34:13.882 sys 0m0.995s 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:13.882 ************************************ 00:34:13.882 END TEST nvmf_shutdown_tc2 00:34:13.882 ************************************ 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:13.882 ************************************ 00:34:13.882 START TEST nvmf_shutdown_tc3 00:34:13.882 ************************************ 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:34:13.882 14:02:06 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:34:13.882 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:34:13.882 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:34:13.882 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:34:13.883 Found net devices under 0000:98:00.0: mlx_0_0 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:34:13.883 Found net devices under 0000:98:00.1: mlx_0_1 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:13.883 14:02:06 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:13.883 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:14.145 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:14.145 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:34:14.145 altname enp152s0f0np0 00:34:14.145 altname ens817f0np0 00:34:14.145 inet 192.168.100.8/24 scope global mlx_0_0 00:34:14.145 valid_lft forever preferred_lft forever 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:14.145 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:14.145 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:34:14.145 altname enp152s0f1np1 00:34:14.145 altname ens817f1np1 00:34:14.145 inet 192.168.100.9/24 scope global mlx_0_1 00:34:14.145 valid_lft forever preferred_lft forever 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:14.145 14:02:06 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:14.145 192.168.100.9' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:14.145 192.168.100.9' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:14.145 192.168.100.9' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2307042 00:34:14.145 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2307042 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2307042 ']' 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:14.146 14:02:06 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:14.146 [2024-06-11 14:02:07.038987] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:14.146 [2024-06-11 14:02:07.039052] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.406 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.406 [2024-06-11 14:02:07.114266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:14.406 [2024-06-11 14:02:07.169576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.406 [2024-06-11 14:02:07.169611] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.406 [2024-06-11 14:02:07.169616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.406 [2024-06-11 14:02:07.169621] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:14.406 [2024-06-11 14:02:07.169625] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.406 [2024-06-11 14:02:07.169739] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:14.406 [2024-06-11 14:02:07.169898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:14.406 [2024-06-11 14:02:07.170104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:34:14.406 [2024-06-11 14:02:07.170231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.978 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:14.978 [2024-06-11 14:02:07.883353] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x243e0d0/0x24425c0) succeed. 00:34:15.240 [2024-06-11 14:02:07.894623] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x243f710/0x2483c50) succeed. 
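For readers following the nvmftestinit block traced above: the per-interface address discovery that nvmf/common.sh performs reduces to the ip/awk/cut pipeline visible in the xtrace. A minimal stand-alone sketch of that idiom (the get_ipv4 helper name is illustrative, not part of the harness):

    # Sketch of the IPv4 lookup traced above (nvmf/common.sh@112-113).
    get_ipv4() {
        local ifc=$1
        # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIX, so strip the prefix.
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    for ifc in mlx_0_0 mlx_0_1; do
        get_ipv4 "$ifc"    # prints 192.168.100.8 and 192.168.100.9 on this testbed
    done

The first value becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP, exactly as the trace shows.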
00:34:15.240 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.240 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:34:15.240 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:34:15.240 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:15.240 14:02:07 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.240 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:15.240 Malloc1 00:34:15.240 [2024-06-11 14:02:08.088776] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:15.240 Malloc2 00:34:15.240 Malloc3 00:34:15.502 Malloc4 
00:34:15.502 Malloc5 00:34:15.502 Malloc6 00:34:15.502 Malloc7 00:34:15.502 Malloc8 00:34:15.502 Malloc9 00:34:15.765 Malloc10 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2307423 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2307423 /var/tmp/bdevperf.sock 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2307423 ']' 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:15.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
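The create_subsystems step above registers Malloc1 through Malloc10 behind ten subsystems before bdevperf is launched. The rpcs.txt that shutdown.sh@28 assembles is not echoed in full here, but given the cnode1..cnode10 subsystem names used in the bdevperf config below and the 192.168.100.8 port 4420 RDMA listener reported just above, each iteration amounts to something like the following hedged sketch using standard SPDK rpc.py calls (bdev size, block size and serial numbers are assumptions, not taken from this log):

    # Illustrative per-subsystem setup, repeated for i in 1..10 (values are assumptions).
    i=1
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420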
00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.765 { 00:34:15.765 "params": { 00:34:15.765 "name": "Nvme$subsystem", 00:34:15.765 "trtype": "$TEST_TRANSPORT", 00:34:15.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.765 "adrfam": "ipv4", 00:34:15.765 "trsvcid": "$NVMF_PORT", 00:34:15.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.765 "hdgst": ${hdgst:-false}, 00:34:15.765 "ddgst": ${ddgst:-false} 00:34:15.765 }, 00:34:15.765 "method": "bdev_nvme_attach_controller" 00:34:15.765 } 00:34:15.765 EOF 00:34:15.765 )") 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.765 { 00:34:15.765 "params": { 00:34:15.765 "name": "Nvme$subsystem", 00:34:15.765 "trtype": "$TEST_TRANSPORT", 00:34:15.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.765 "adrfam": "ipv4", 00:34:15.765 "trsvcid": "$NVMF_PORT", 00:34:15.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.765 "hdgst": ${hdgst:-false}, 00:34:15.765 "ddgst": ${ddgst:-false} 00:34:15.765 }, 00:34:15.765 "method": "bdev_nvme_attach_controller" 00:34:15.765 } 00:34:15.765 EOF 00:34:15.765 )") 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.765 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.765 { 00:34:15.765 "params": { 00:34:15.765 "name": "Nvme$subsystem", 00:34:15.765 "trtype": "$TEST_TRANSPORT", 00:34:15.765 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.765 "adrfam": "ipv4", 00:34:15.765 "trsvcid": "$NVMF_PORT", 00:34:15.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.765 "hdgst": ${hdgst:-false}, 00:34:15.765 "ddgst": ${ddgst:-false} 00:34:15.765 }, 00:34:15.765 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.766 { 00:34:15.766 "params": { 00:34:15.766 "name": "Nvme$subsystem", 00:34:15.766 "trtype": "$TEST_TRANSPORT", 00:34:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.766 "adrfam": "ipv4", 00:34:15.766 "trsvcid": 
"$NVMF_PORT", 00:34:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.766 "hdgst": ${hdgst:-false}, 00:34:15.766 "ddgst": ${ddgst:-false} 00:34:15.766 }, 00:34:15.766 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.766 { 00:34:15.766 "params": { 00:34:15.766 "name": "Nvme$subsystem", 00:34:15.766 "trtype": "$TEST_TRANSPORT", 00:34:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.766 "adrfam": "ipv4", 00:34:15.766 "trsvcid": "$NVMF_PORT", 00:34:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.766 "hdgst": ${hdgst:-false}, 00:34:15.766 "ddgst": ${ddgst:-false} 00:34:15.766 }, 00:34:15.766 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.766 { 00:34:15.766 "params": { 00:34:15.766 "name": "Nvme$subsystem", 00:34:15.766 "trtype": "$TEST_TRANSPORT", 00:34:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.766 "adrfam": "ipv4", 00:34:15.766 "trsvcid": "$NVMF_PORT", 00:34:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.766 "hdgst": ${hdgst:-false}, 00:34:15.766 "ddgst": ${ddgst:-false} 00:34:15.766 }, 00:34:15.766 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.766 { 00:34:15.766 "params": { 00:34:15.766 "name": "Nvme$subsystem", 00:34:15.766 "trtype": "$TEST_TRANSPORT", 00:34:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.766 "adrfam": "ipv4", 00:34:15.766 "trsvcid": "$NVMF_PORT", 00:34:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.766 "hdgst": ${hdgst:-false}, 00:34:15.766 "ddgst": ${ddgst:-false} 00:34:15.766 }, 00:34:15.766 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 [2024-06-11 14:02:08.554902] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:34:15.766 [2024-06-11 14:02:08.554958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307423 ] 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.766 { 00:34:15.766 "params": { 00:34:15.766 "name": "Nvme$subsystem", 00:34:15.766 "trtype": "$TEST_TRANSPORT", 00:34:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.766 "adrfam": "ipv4", 00:34:15.766 "trsvcid": "$NVMF_PORT", 00:34:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.766 "hdgst": ${hdgst:-false}, 00:34:15.766 "ddgst": ${ddgst:-false} 00:34:15.766 }, 00:34:15.766 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.766 { 00:34:15.766 "params": { 00:34:15.766 "name": "Nvme$subsystem", 00:34:15.766 "trtype": "$TEST_TRANSPORT", 00:34:15.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.766 "adrfam": "ipv4", 00:34:15.766 "trsvcid": "$NVMF_PORT", 00:34:15.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.766 "hdgst": ${hdgst:-false}, 00:34:15.766 "ddgst": ${ddgst:-false} 00:34:15.766 }, 00:34:15.766 "method": "bdev_nvme_attach_controller" 00:34:15.766 } 00:34:15.766 EOF 00:34:15.766 )") 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.766 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:15.767 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:15.767 { 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme$subsystem", 00:34:15.767 "trtype": "$TEST_TRANSPORT", 00:34:15.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "$NVMF_PORT", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.767 "hdgst": ${hdgst:-false}, 00:34:15.767 "ddgst": ${ddgst:-false} 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 } 00:34:15.767 EOF 00:34:15.767 )") 00:34:15.767 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:34:15.767 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
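At this point gen_nvmf_target_json has expanded into the ten bdev_nvme_attach_controller entries printed below, and bdevperf reads them through the /dev/fd/63 descriptor shown on its command line earlier in the trace. A /dev/fd path like that is what a bash process substitution produces, so the launch is, in effect, the following (a sketch inferred from the traced command line; the exact quoting in shutdown.sh may differ):

    # How the generated JSON most plausibly reaches bdevperf (process substitution -> /dev/fd/63).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10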
00:34:15.767 EAL: No free 2048 kB hugepages reported on node 1 00:34:15.767 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:34:15.767 14:02:08 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme1", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme2", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme3", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme4", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme5", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme6", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme7", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme8", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:34:15.767 "hdgst": false, 00:34:15.767 "ddgst": false 00:34:15.767 }, 00:34:15.767 "method": "bdev_nvme_attach_controller" 00:34:15.767 },{ 00:34:15.767 "params": { 00:34:15.767 "name": "Nvme9", 00:34:15.767 "trtype": "rdma", 00:34:15.767 "traddr": "192.168.100.8", 00:34:15.767 "adrfam": "ipv4", 00:34:15.767 "trsvcid": "4420", 00:34:15.767 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:34:15.767 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:34:15.768 "hdgst": false, 00:34:15.768 "ddgst": false 00:34:15.768 }, 00:34:15.768 "method": "bdev_nvme_attach_controller" 00:34:15.768 },{ 00:34:15.768 "params": { 00:34:15.768 "name": "Nvme10", 00:34:15.768 "trtype": "rdma", 00:34:15.768 "traddr": "192.168.100.8", 00:34:15.768 "adrfam": "ipv4", 00:34:15.768 "trsvcid": "4420", 00:34:15.768 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:34:15.768 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:34:15.768 "hdgst": false, 00:34:15.768 "ddgst": false 00:34:15.768 }, 00:34:15.768 "method": "bdev_nvme_attach_controller" 00:34:15.768 }' 00:34:15.768 [2024-06-11 14:02:08.615708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.028 [2024-06-11 14:02:08.681712] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.971 Running I/O for 10 seconds... 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:34:16.971 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:34:16.972 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:16.972 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:16.972 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:16.972 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.972 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.234 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.234 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=34 00:34:17.234 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 34 -ge 100 ']' 00:34:17.234 14:02:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2307042 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 2307042 ']' 00:34:17.495 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 2307042 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2307042 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2307042' 00:34:17.756 killing process with pid 2307042 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 2307042 00:34:17.756 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 2307042 00:34:18.016 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:34:18.016 14:02:10 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:34:18.967 [2024-06-11 14:02:11.562033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.967 [2024-06-11 14:02:11.562079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:c070 p:0 m:0 dnr:0 00:34:18.967 [2024-06-11 14:02:11.562090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.967 [2024-06-11 14:02:11.562098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:c070 p:0 m:0 dnr:0 00:34:18.967 [2024-06-11 14:02:11.562107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.967 [2024-06-11 14:02:11.562115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:c070 p:0 m:0 dnr:0 00:34:18.967 [2024-06-11 14:02:11.562123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.562130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:3eff200 sqhd:c070 p:0 m:0 dnr:0 00:34:18.968 [2024-06-11 14:02:11.565090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.565119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:34:18.968 [2024-06-11 14:02:11.565146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.565156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.565166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.565173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.565181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.565188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.565196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.565203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.567811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.567845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:34:18.968 [2024-06-11 14:02:11.567884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.567907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.567941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.567962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.567986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.568006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.568042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.568062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.570765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.570795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:34:18.968 [2024-06-11 14:02:11.570832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.570854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.570878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.570899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.570922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.570943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.570966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.570985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.573660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.573690] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:34:18.968 [2024-06-11 14:02:11.573728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.573750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.573773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.573793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.573817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.573837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.573860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.573880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.576554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.576585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:34:18.968 [2024-06-11 14:02:11.576624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.576646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.576669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.576690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.576713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.576733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.576756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.576776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.579537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.579551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:34:18.968 [2024-06-11 14:02:11.579564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.579572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.579580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.579587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.579594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.579601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.579609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.579616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.581958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.581987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:34:18.968 [2024-06-11 14:02:11.582053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.582077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.582100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.582127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.582150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.582170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.582192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.582213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.584772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.968 [2024-06-11 14:02:11.584802] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:34:18.968 [2024-06-11 14:02:11.584839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.584861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.584884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.584905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.968 [2024-06-11 14:02:11.584928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.968 [2024-06-11 14:02:11.584948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.969 [2024-06-11 14:02:11.584971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.969 [2024-06-11 14:02:11.584991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.969 [2024-06-11 14:02:11.587673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.969 [2024-06-11 14:02:11.587687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:34:18.969 [2024-06-11 14:02:11.587700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.969 [2024-06-11 14:02:11.587708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.969 [2024-06-11 14:02:11.587715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.969 [2024-06-11 14:02:11.587722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.969 [2024-06-11 14:02:11.587730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.969 [2024-06-11 14:02:11.587737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.969 [2024-06-11 14:02:11.587745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.969 [2024-06-11 14:02:11.587751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:44724 cdw0:3eff200 sqhd:1b00 p:1 m:1 dnr:0 00:34:18.969 [2024-06-11 14:02:11.590087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:18.969 [2024-06-11 14:02:11.590100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:18.969 [2024-06-11 14:02:11.592841] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:34:18.969 [2024-06-11 14:02:11.592876] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.969 [2024-06-11 14:02:11.595464] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:34:18.969 [2024-06-11 14:02:11.595497] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.969 [2024-06-11 14:02:11.598239] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:34:18.969 [2024-06-11 14:02:11.598250] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.969 [2024-06-11 14:02:11.600884] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:34:18.969 [2024-06-11 14:02:11.600917] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.969 [2024-06-11 14:02:11.603639] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:34:18.969 [2024-06-11 14:02:11.603674] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.969 [2024-06-11 14:02:11.603909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a08fb00 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.603935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.603977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a07fa80 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.603999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a06fa00 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 
m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a03f880 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x182d00 00:34:18.969 [2024-06-11 14:02:11.604490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f480 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e4f300 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 
[2024-06-11 14:02:11.604787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e1f180 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182c00 00:34:18.969 [2024-06-11 14:02:11.604852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3f0000 len:0x10000 key:0x183000 00:34:18.969 [2024-06-11 14:02:11.604873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183000 00:34:18.969 [2024-06-11 14:02:11.604892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183000 00:34:18.969 [2024-06-11 14:02:11.604911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x183000 00:34:18.969 [2024-06-11 14:02:11.604931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 14:02:11.604943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x183000 00:34:18.969 [2024-06-11 14:02:11.604950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.969 [2024-06-11 
14:02:11.604962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183000 00:34:18.969 [2024-06-11 14:02:11.604970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.604982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.604989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605142] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183000 00:34:18.970 [2024-06-11 14:02:11.605460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a56fc00 len:0x10000 key:0x182e00 00:34:18.970 [2024-06-11 14:02:11.605634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.970 [2024-06-11 14:02:11.605646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x182d00 00:34:18.970 [2024-06-11 14:02:11.605654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.605666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f519000 len:0x10000 key:0x182900 00:34:18.971 [2024-06-11 14:02:11.605673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.605688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4f8000 len:0x10000 key:0x182900 00:34:18.971 [2024-06-11 14:02:11.605695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.605707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4d7000 len:0x10000 key:0x182900 00:34:18.971 [2024-06-11 14:02:11.605714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.605727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4b6000 len:0x10000 key:0x182900 00:34:18.971 [2024-06-11 14:02:11.605735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.605748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f495000 len:0x10000 key:0x182900 00:34:18.971 [2024-06-11 14:02:11.605755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:22b8c80 sqhd:1700 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.608900] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:34:18.971 [2024-06-11 14:02:11.608912] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:34:18.971 [2024-06-11 14:02:11.608924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.608932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.608951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.608959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.608971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.608979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.608991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.608998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a79fd80 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 
14:02:11.609114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609464] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183400 00:34:18.971 [2024-06-11 14:02:11.609491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.971 [2024-06-11 14:02:11.609503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183400 00:34:18.972 [2024-06-11 14:02:11.609511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183400 00:34:18.972 [2024-06-11 14:02:11.609529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.609981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 
len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.609989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.610000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.610008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.610062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.610070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.610082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183d00 00:34:18.972 [2024-06-11 14:02:11.610091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.972 [2024-06-11 14:02:11.610103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x182e00 00:34:18.973 [2024-06-11 14:02:11.610110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.610122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f939000 len:0x10000 key:0x182900 00:34:18.973 [2024-06-11 14:02:11.610129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.610143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x182900 00:34:18.973 [2024-06-11 14:02:11.610150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.610162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x182900 00:34:18.973 [2024-06-11 14:02:11.610169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.610182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x182900 00:34:18.973 [2024-06-11 14:02:11.610190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.610202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b5000 len:0x10000 key:0x182900 00:34:18.973 
[2024-06-11 14:02:11.610209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806bc0 sqhd:20d0 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613304] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:34:18.973 [2024-06-11 14:02:11.613316] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.973 [2024-06-11 14:02:11.613327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183c00 00:34:18.973 [2024-06-11 14:02:11.613337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183c00 00:34:18.973 [2024-06-11 14:02:11.613359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183c00 00:34:18.973 [2024-06-11 14:02:11.613379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183c00 00:34:18.973 [2024-06-11 14:02:11.613399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183c00 00:34:18.973 [2024-06-11 14:02:11.613418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183d00 00:34:18.973 [2024-06-11 14:02:11.613437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183d00 00:34:18.973 [2024-06-11 14:02:11.613457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183d00 00:34:18.973 [2024-06-11 14:02:11.613477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183d00 00:34:18.973 [2024-06-11 14:02:11.613495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 
sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 
00:34:18.973 [2024-06-11 14:02:11.613836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.973 [2024-06-11 14:02:11.613874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183500 00:34:18.973 [2024-06-11 14:02:11.613881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.613893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.613900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.613912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.613920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.613932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.613939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.613951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.613958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.613970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.613978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.613991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.613998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 
14:02:11.614010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.614022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.614041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.614060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.614080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183500 00:34:18.974 [2024-06-11 14:02:11.614098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614360] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x182f00 00:34:18.974 [2024-06-11 14:02:11.614367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183c00 00:34:18.974 [2024-06-11 14:02:11.614386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd59000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ca8000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c87000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010fe9000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001100a000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33664 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.614561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b41000 len:0x10000 key:0x182900 00:34:18.974 [2024-06-11 14:02:11.614568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806940 sqhd:b010 p:0 m:0 dnr:0 00:34:18.974 [2024-06-11 14:02:11.617662] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:34:18.974 [2024-06-11 14:02:11.617674] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.974 [2024-06-11 14:02:11.617685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 
14:02:11.617813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.617984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.617991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618182] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x184100 00:34:18.975 [2024-06-11 14:02:11.618300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183200 00:34:18.975 [2024-06-11 14:02:11.618320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183200 00:34:18.975 [2024-06-11 14:02:11.618339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183200 00:34:18.975 [2024-06-11 14:02:11.618358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183200 00:34:18.975 [2024-06-11 14:02:11.618377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183200 00:34:18.975 [2024-06-11 14:02:11.618396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.975 [2024-06-11 14:02:11.618408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 
sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 
00:34:18.976 [2024-06-11 14:02:11.618718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 
14:02:11.618892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183200 00:34:18.976 [2024-06-11 14:02:11.618899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x184000 00:34:18.976 [2024-06-11 14:02:11.618919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.618931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x182f00 00:34:18.976 [2024-06-11 14:02:11.618938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8066c0 sqhd:e010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.622246] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 00:34:18.976 [2024-06-11 14:02:11.622259] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.976 [2024-06-11 14:02:11.622271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x184000 00:34:18.976 [2024-06-11 14:02:11.622278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.622297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x184000 00:34:18.976 [2024-06-11 14:02:11.622305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.622317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x184000 00:34:18.976 [2024-06-11 14:02:11.622325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.622338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x184000 00:34:18.976 [2024-06-11 14:02:11.622345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.976 [2024-06-11 14:02:11.622357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x184000 00:34:18.977 [2024-06-11 14:02:11.622524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b7dff80 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 
key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183800 00:34:18.977 
[2024-06-11 14:02:11.622914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.622984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.622991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.623003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183800 00:34:18.977 [2024-06-11 14:02:11.623010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.977 [2024-06-11 14:02:11.623028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183800 00:34:18.978 [2024-06-11 14:02:11.623035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183800 00:34:18.978 [2024-06-11 14:02:11.623056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183800 00:34:18.978 [2024-06-11 14:02:11.623075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183800 00:34:18.978 [2024-06-11 14:02:11.623094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183800 00:34:18.978 [2024-06-11 14:02:11.623114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183800 00:34:18.978 [2024-06-11 14:02:11.623132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623442] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x183900 00:34:18.978 [2024-06-11 14:02:11.623502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.623514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x184000 00:34:18.978 [2024-06-11 14:02:11.623522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806440 sqhd:5010 p:0 m:0 dnr:0 00:34:18.978 [2024-06-11 14:02:11.644775] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:34:18.978 [2024-06-11 14:02:11.644820] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645007] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645058] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645089] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645102] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645112] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645122] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645132] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645142] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645152] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:18.978 [2024-06-11 14:02:11.645162] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:34:18.978 [2024-06-11 14:02:11.652079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:34:18.978 [2024-06-11 14:02:11.652104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:34:18.978 [2024-06-11 14:02:11.652114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:34:18.978 task offset: 40960 on job bdev=Nvme1n1 fails 00:34:18.978 00:34:18.978 Latency(us) 00:34:18.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.978 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.978 Job: Nvme1n1 ended in about 2.10 seconds with error 00:34:18.978 Verification LBA range: start 0x0 length 0x400 00:34:18.978 Nvme1n1 : 2.10 136.43 8.53 30.42 0.00 379806.13 6089.39 1055566.51 00:34:18.978 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.978 Job: Nvme2n1 ended in about 2.10 seconds with error 00:34:18.978 Verification LBA range: start 0x0 length 0x400 00:34:18.978 Nvme2n1 : 2.10 131.61 8.23 30.41 0.00 387465.58 15291.73 1055566.51 00:34:18.978 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.978 Job: Nvme3n1 ended in about 2.11 seconds with error 00:34:18.978 Verification LBA range: start 0x0 length 0x400 00:34:18.978 Nvme3n1 : 2.11 136.77 8.55 30.39 0.00 372035.34 21954.56 1048576.00 00:34:18.978 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.978 Job: Nvme4n1 ended in about 2.11 seconds with error 00:34:18.978 Verification LBA range: start 0x0 length 0x400 00:34:18.978 Nvme4n1 : 2.11 132.42 8.28 30.38 0.00 377992.10 32112.64 1048576.00 00:34:18.979 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.979 Job: Nvme5n1 ended in about 2.11 seconds with error 00:34:18.979 Verification LBA range: start 0x0 length 0x400 00:34:18.979 Nvme5n1 : 2.11 121.92 7.62 30.36 0.00 399826.43 38010.88 1048576.00 00:34:18.979 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.979 Job: Nvme6n1 ended in about 2.06 seconds with error 00:34:18.979 Verification LBA range: start 0x0 length 0x400 00:34:18.979 Nvme6n1 : 2.06 124.03 7.75 31.01 0.00 390430.38 40195.41 1139452.59 00:34:18.979 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.979 Job: Nvme7n1 ended in about 2.07 seconds with error 00:34:18.979 Verification LBA range: start 0x0 length 0x400 00:34:18.979 Nvme7n1 : 2.07 123.76 7.74 30.94 0.00 387346.77 42161.49 1125471.57 00:34:18.979 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.979 Job: Nvme8n1 ended in about 2.07 seconds with error 00:34:18.979 Verification LBA range: start 0x0 length 0x400 00:34:18.979 Nvme8n1 : 2.07 123.50 7.72 30.88 0.00 384199.68 43690.67 1111490.56 00:34:18.979 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.979 Job: Nvme9n1 ended in about 2.08 seconds with error 00:34:18.979 Verification LBA range: start 0x0 length 0x400 00:34:18.979 Nvme9n1 : 2.08 123.24 7.70 30.81 0.00 381060.44 46967.47 1097509.55 00:34:18.979 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:34:18.979 Job: Nvme10n1 ended in about 2.08 seconds with error 00:34:18.979 Verification LBA range: start 0x0 length 0x400 00:34:18.979 Nvme10n1 : 2.08 61.49 3.84 30.74 0.00 630013.72 47622.83 1083528.53 00:34:18.979 
=================================================================================================================== 00:34:18.979 Total : 1215.18 75.95 306.34 0.00 399095.69 6089.39 1139452.59 00:34:18.979 [2024-06-11 14:02:11.674788] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:18.979 [2024-06-11 14:02:11.674810] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:34:18.979 [2024-06-11 14:02:11.676186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:34:18.979 [2024-06-11 14:02:11.676200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:34:18.979 [2024-06-11 14:02:11.676209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:34:18.979 [2024-06-11 14:02:11.676218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:34:18.979 [2024-06-11 14:02:11.676226] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:18.979 [2024-06-11 14:02:11.676235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:34:18.979 [2024-06-11 14:02:11.695316] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.695337] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.695349] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:34:18.979 [2024-06-11 14:02:11.695606] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.695615] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.695621] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:34:18.979 [2024-06-11 14:02:11.695784] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.695793] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.695798] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:34:18.979 [2024-06-11 14:02:11.695960] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.695969] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.695974] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:34:18.979 [2024-06-11 14:02:11.696220] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.696230] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 
14:02:11.696235] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:34:18.979 [2024-06-11 14:02:11.696382] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.696390] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.696396] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:34:18.979 [2024-06-11 14:02:11.696552] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.696560] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.696565] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:34:18.979 [2024-06-11 14:02:11.696709] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.696717] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.696722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:34:18.979 [2024-06-11 14:02:11.696869] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.696877] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.696883] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:34:18.979 [2024-06-11 14:02:11.697077] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:34:18.979 [2024-06-11 14:02:11.697087] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:34:18.979 [2024-06-11 14:02:11.697093] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2307423 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:34:18.979 14:02:11 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:18.979 rmmod nvme_rdma 00:34:18.979 rmmod nvme_fabrics 00:34:18.979 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2307423 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:18.979 00:34:18.979 real 0m5.129s 00:34:18.979 user 0m17.554s 00:34:18.979 sys 0m1.028s 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:18.979 14:02:11 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:34:18.979 ************************************ 00:34:18.979 END TEST nvmf_shutdown_tc3 00:34:18.979 ************************************ 00:34:19.240 14:02:11 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:34:19.240 00:34:19.240 real 0m24.904s 00:34:19.240 user 1m10.557s 00:34:19.240 sys 0m8.538s 00:34:19.240 14:02:11 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:19.240 14:02:11 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:19.240 ************************************ 00:34:19.240 END TEST nvmf_shutdown 00:34:19.240 ************************************ 00:34:19.240 14:02:11 nvmf_rdma -- nvmf/nvmf.sh@85 -- # timing_exit target 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:19.240 14:02:11 nvmf_rdma -- nvmf/nvmf.sh@87 -- # timing_enter host 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:19.240 14:02:11 nvmf_rdma -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:34:19.240 14:02:11 nvmf_rdma -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:19.240 14:02:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:19.240 ************************************ 00:34:19.240 START TEST nvmf_multicontroller 00:34:19.240 
************************************ 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:34:19.240 * Looking for test storage... 00:34:19.240 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:19.240 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.241 14:02:12 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:34:19.501 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:34:19.501 00:34:19.501 real 0m0.134s 00:34:19.501 user 0m0.063s 00:34:19.501 sys 0m0.079s 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:19.501 14:02:12 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:34:19.501 ************************************ 00:34:19.501 END TEST nvmf_multicontroller 00:34:19.501 ************************************ 00:34:19.501 14:02:12 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:34:19.501 14:02:12 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:19.501 14:02:12 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:19.501 14:02:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:19.501 ************************************ 00:34:19.501 START TEST nvmf_aer 00:34:19.501 ************************************ 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:34:19.501 * Looking for test storage... 00:34:19.501 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:19.501 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:19.502 14:02:12 
nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:34:19.502 14:02:12 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.642 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:27.642 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:34:27.642 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:27.642 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:34:27.643 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:34:27.643 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:34:27.643 Found net devices under 0000:98:00.0: mlx_0_0 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:34:27.643 Found net devices under 0000:98:00.1: mlx_0_1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:27.643 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:27.643 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:34:27.643 altname enp152s0f0np0 00:34:27.643 altname ens817f0np0 00:34:27.643 inet 192.168.100.8/24 scope global mlx_0_0 00:34:27.643 valid_lft forever preferred_lft forever 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:27.643 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:27.643 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:34:27.643 altname enp152s0f1np1 00:34:27.643 altname ens817f1np1 00:34:27.643 inet 192.168.100.9/24 scope global mlx_0_1 00:34:27.643 valid_lft forever preferred_lft forever 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:27.643 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 
)) 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:27.644 192.168.100.9' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:27.644 192.168.100.9' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:27.644 192.168.100.9' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- 
host/aer.sh@12 -- # nvmfappstart -m 0xF 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2311920 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2311920 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 2311920 ']' 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.644 14:02:19 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:27.644 [2024-06-11 14:02:19.689845] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:27.644 [2024-06-11 14:02:19.689899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.644 EAL: No free 2048 kB hugepages reported on node 1 00:34:27.644 [2024-06-11 14:02:19.752125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:27.644 [2024-06-11 14:02:19.820420] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.644 [2024-06-11 14:02:19.820456] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.644 [2024-06-11 14:02:19.820463] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.644 [2024-06-11 14:02:19.820470] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.644 [2024-06-11 14:02:19.820476] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
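nvmfappstart above boils down to launching nvmf_tgt and blocking until its RPC socket answers. A rough stand-alone equivalent, assuming the SPDK checkout path shown in this log and using rpc.py's stock rpc_get_methods call as the readiness probe (the harness's own waitforlisten helper is more elaborate), would be:

    #!/usr/bin/env bash
    # Start the NVMe-oF target with a 4-core mask and wait for its RPC socket.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path taken from this log
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the target has bound /var/tmp/spdk.sock and answers RPCs.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"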
00:34:27.644 [2024-06-11 14:02:19.820611] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.644 [2024-06-11 14:02:19.820731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.644 [2024-06-11 14:02:19.820885] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.644 [2024-06-11 14:02:19.820886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.644 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.644 [2024-06-11 14:02:20.537001] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10eae90/0x10ef380) succeed. 00:34:27.644 [2024-06-11 14:02:20.550193] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10ec4d0/0x1130a10) succeed. 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.906 Malloc0 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.906 [2024-06-11 14:02:20.722314] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:27.906 [ 00:34:27.906 { 00:34:27.906 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:27.906 "subtype": "Discovery", 00:34:27.906 "listen_addresses": [], 00:34:27.906 "allow_any_host": true, 00:34:27.906 "hosts": [] 00:34:27.906 }, 00:34:27.906 { 00:34:27.906 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:27.906 "subtype": "NVMe", 00:34:27.906 "listen_addresses": [ 00:34:27.906 { 00:34:27.906 "trtype": "RDMA", 00:34:27.906 "adrfam": "IPv4", 00:34:27.906 "traddr": "192.168.100.8", 00:34:27.906 "trsvcid": "4420" 00:34:27.906 } 00:34:27.906 ], 00:34:27.906 "allow_any_host": true, 00:34:27.906 "hosts": [], 00:34:27.906 "serial_number": "SPDK00000000000001", 00:34:27.906 "model_number": "SPDK bdev Controller", 00:34:27.906 "max_namespaces": 2, 00:34:27.906 "min_cntlid": 1, 00:34:27.906 "max_cntlid": 65519, 00:34:27.906 "namespaces": [ 00:34:27.906 { 00:34:27.906 "nsid": 1, 00:34:27.906 "bdev_name": "Malloc0", 00:34:27.906 "name": "Malloc0", 00:34:27.906 "nguid": "85F8CF4561FB404BAA6E618BAC04A2D0", 00:34:27.906 "uuid": "85f8cf45-61fb-404b-aa6e-618bac04a2d0" 00:34:27.906 } 00:34:27.906 ] 00:34:27.906 } 00:34:27.906 ] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=2312125 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:34:27.906 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:34:27.906 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.167 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:28.167 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:34:28.167 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:34:28.167 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:34:28.167 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.168 Malloc1 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.168 14:02:20 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.168 [ 00:34:28.168 { 00:34:28.168 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:28.168 "subtype": "Discovery", 00:34:28.168 "listen_addresses": [], 00:34:28.168 "allow_any_host": true, 00:34:28.168 "hosts": [] 00:34:28.168 }, 00:34:28.168 { 00:34:28.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:28.168 "subtype": "NVMe", 00:34:28.168 "listen_addresses": [ 00:34:28.168 { 00:34:28.168 "trtype": "RDMA", 00:34:28.168 "adrfam": "IPv4", 00:34:28.168 "traddr": "192.168.100.8", 00:34:28.168 "trsvcid": "4420" 00:34:28.168 } 00:34:28.168 ], 00:34:28.168 "allow_any_host": true, 00:34:28.168 "hosts": [], 00:34:28.168 "serial_number": "SPDK00000000000001", 00:34:28.168 "model_number": "SPDK bdev Controller", 00:34:28.168 "max_namespaces": 2, 00:34:28.168 "min_cntlid": 1, 00:34:28.168 "max_cntlid": 65519, 00:34:28.168 "namespaces": [ 00:34:28.168 { 00:34:28.168 "nsid": 1, 00:34:28.168 "bdev_name": "Malloc0", 00:34:28.168 "name": "Malloc0", 00:34:28.168 "nguid": "85F8CF4561FB404BAA6E618BAC04A2D0", 00:34:28.168 "uuid": "85f8cf45-61fb-404b-aa6e-618bac04a2d0" 00:34:28.168 }, 00:34:28.168 { 00:34:28.168 "nsid": 2, 00:34:28.168 "bdev_name": "Malloc1", 00:34:28.168 "name": "Malloc1", 00:34:28.168 "nguid": "3C5ED5E88E6F4D2796F582CAEA6C9218", 00:34:28.168 "uuid": "3c5ed5e8-8e6f-4d27-96f5-82caea6c9218" 00:34:28.168 } 00:34:28.168 ] 00:34:28.168 } 00:34:28.168 ] 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 2312125 00:34:28.168 Asynchronous Event Request test 00:34:28.168 Attaching to 192.168.100.8 00:34:28.168 Attached to 192.168.100.8 00:34:28.168 Registering asynchronous event callbacks... 00:34:28.168 Starting namespace attribute notice tests for all controllers... 00:34:28.168 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:34:28.168 aer_cb - Changed Namespace 00:34:28.168 Cleaning up... 
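The aer binary above blocks on /tmp/aer_touch_file while the script hot-adds a second namespace; it is that nvmf_subsystem_add_ns call which produces the "Changed Namespace" asynchronous event in the test output. A condensed rpc.py sketch of the same sequence (helper path assumed from this workspace):

    #!/usr/bin/env bash
    # Hot-add a second namespace to cnode1 and list the subsystems again;
    # the attached host sees this as a namespace-attribute-changed AEN.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 4096 --name Malloc1          # 64 MB malloc bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    $rpc nvmf_get_subsystems                                # now lists nsid 1 (Malloc0) and nsid 2 (Malloc1)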
00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.168 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:28.429 rmmod nvme_rdma 00:34:28.429 rmmod nvme_fabrics 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2311920 ']' 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2311920 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 2311920 ']' 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 2311920 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2311920 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2311920' 00:34:28.429 killing process with pid 2311920 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@968 -- # kill 2311920 00:34:28.429 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@973 -- # wait 2311920 00:34:28.690 14:02:21 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:28.690 14:02:21 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:28.690 00:34:28.690 real 0m9.197s 00:34:28.690 user 0m8.480s 00:34:28.690 sys 0m5.855s 00:34:28.690 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:28.690 14:02:21 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:34:28.690 ************************************ 00:34:28.690 END TEST nvmf_aer 00:34:28.690 ************************************ 00:34:28.690 14:02:21 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:34:28.690 14:02:21 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:28.690 14:02:21 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:28.690 14:02:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:28.690 ************************************ 00:34:28.690 START TEST nvmf_async_init 00:34:28.690 ************************************ 00:34:28.690 14:02:21 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:34:28.690 * Looking for test storage... 00:34:28.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:28.951 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.952 14:02:21 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8e23bba83365411a9d596c75eb968453 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:34:28.952 14:02:21 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.172 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:37.172 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:34:37.172 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:37.172 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:37.172 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
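async_init.sh above sizes its null bdev (1024 blocks of 512 bytes) and builds the namespace GUID by stripping the dashes from a random UUID. A small sketch of that identifier handling, using the variable names from the trace:

    #!/usr/bin/env bash
    # Generate a namespace GUID the way the trace above does: take a UUID and
    # drop the dashes. The dashed and dash-less forms encode the same 128 bits,
    # which is why bdev_get_bdevs later reports the dashed UUID for this nguid.
    null_bdev_size=1024   # blocks
    null_block_size=512   # bytes per block
    nguid=$(uuidgen | tr -d -)   # e.g. 8e23bba83365411a9d596c75eb968453 in this run
    echo "null0: ${null_bdev_size} x ${null_block_size}B blocks, nguid=$nguid"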
00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:34:37.173 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:34:37.173 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:37.173 14:02:28 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:34:37.173 Found net devices under 0000:98:00.0: mlx_0_0 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:34:37.173 Found net devices under 0000:98:00.1: mlx_0_1 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:37.173 
14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:37.173 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:37.174 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:37.174 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:34:37.174 altname enp152s0f0np0 00:34:37.174 altname ens817f0np0 00:34:37.174 inet 192.168.100.8/24 scope global mlx_0_0 00:34:37.174 valid_lft forever preferred_lft forever 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:37.174 14:02:28 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:37.174 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:37.174 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:34:37.174 altname enp152s0f1np1 00:34:37.174 altname ens817f1np1 00:34:37.174 inet 192.168.100.9/24 scope global mlx_0_1 00:34:37.174 valid_lft forever preferred_lft forever 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:37.174 192.168.100.9' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:37.174 192.168.100.9' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:37.174 192.168.100.9' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:37.174 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2315996 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2315996 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 2315996 ']' 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
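The IP bookkeeping traced above is plain text processing over `ip -o -4 addr show`: each interface's address is the fourth field with the prefix length cut off, and the first/second target IPs are simply the first and second entries of the resulting list. A self-contained sketch (interface names taken from this log):

    #!/usr/bin/env bash
    # Extract the IPv4 address of an interface exactly as nvmf/common.sh does.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"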
00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:37.175 14:02:28 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 [2024-06-11 14:02:28.966233] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:37.175 [2024-06-11 14:02:28.966286] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:37.175 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.175 [2024-06-11 14:02:29.028767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.175 [2024-06-11 14:02:29.095817] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:37.175 [2024-06-11 14:02:29.095855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:37.175 [2024-06-11 14:02:29.095862] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:37.175 [2024-06-11 14:02:29.095868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:37.175 [2024-06-11 14:02:29.095874] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:37.175 [2024-06-11 14:02:29.095900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 [2024-06-11 14:02:29.817887] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1efcc40/0x1f01130) succeed. 00:34:37.175 [2024-06-11 14:02:29.830027] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1efe140/0x1f427c0) succeed. 
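The "Create IB device ... succeed" notices above are the target's response to the nvmf_create_transport call that opens the RDMA transport. Expressed as a stand-alone rpc.py invocation (helper path assumed, parameters as traced), that step is simply:

    #!/usr/bin/env bash
    # Create the RDMA transport on the freshly started target; SPDK probes the
    # available IB devices (mlx5_0 and mlx5_1 here) as part of this call.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024

The earlier aer setup passed an extra -u 8192 to the same RPC; this test leaves that option at its default.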
00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 null0 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8e23bba83365411a9d596c75eb968453 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 [2024-06-11 14:02:29.927273] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.175 14:02:29 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.175 nvme0n1 00:34:37.175 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.175 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:37.176 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.176 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.176 [ 00:34:37.176 { 00:34:37.176 "name": "nvme0n1", 00:34:37.176 "aliases": [ 00:34:37.176 "8e23bba8-3365-411a-9d59-6c75eb968453" 00:34:37.176 ], 00:34:37.176 "product_name": "NVMe disk", 00:34:37.176 "block_size": 512, 00:34:37.176 "num_blocks": 2097152, 00:34:37.176 "uuid": 
"8e23bba8-3365-411a-9d59-6c75eb968453", 00:34:37.176 "assigned_rate_limits": { 00:34:37.176 "rw_ios_per_sec": 0, 00:34:37.176 "rw_mbytes_per_sec": 0, 00:34:37.176 "r_mbytes_per_sec": 0, 00:34:37.176 "w_mbytes_per_sec": 0 00:34:37.176 }, 00:34:37.176 "claimed": false, 00:34:37.176 "zoned": false, 00:34:37.176 "supported_io_types": { 00:34:37.176 "read": true, 00:34:37.176 "write": true, 00:34:37.176 "unmap": false, 00:34:37.176 "write_zeroes": true, 00:34:37.176 "flush": true, 00:34:37.176 "reset": true, 00:34:37.176 "compare": true, 00:34:37.176 "compare_and_write": true, 00:34:37.176 "abort": true, 00:34:37.176 "nvme_admin": true, 00:34:37.176 "nvme_io": true 00:34:37.176 }, 00:34:37.176 "memory_domains": [ 00:34:37.176 { 00:34:37.176 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:34:37.176 "dma_device_type": 0 00:34:37.176 } 00:34:37.176 ], 00:34:37.176 "driver_specific": { 00:34:37.176 "nvme": [ 00:34:37.176 { 00:34:37.176 "trid": { 00:34:37.176 "trtype": "RDMA", 00:34:37.176 "adrfam": "IPv4", 00:34:37.176 "traddr": "192.168.100.8", 00:34:37.176 "trsvcid": "4420", 00:34:37.176 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:37.176 }, 00:34:37.176 "ctrlr_data": { 00:34:37.176 "cntlid": 1, 00:34:37.176 "vendor_id": "0x8086", 00:34:37.176 "model_number": "SPDK bdev Controller", 00:34:37.176 "serial_number": "00000000000000000000", 00:34:37.176 "firmware_revision": "24.09", 00:34:37.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.176 "oacs": { 00:34:37.176 "security": 0, 00:34:37.176 "format": 0, 00:34:37.176 "firmware": 0, 00:34:37.176 "ns_manage": 0 00:34:37.176 }, 00:34:37.176 "multi_ctrlr": true, 00:34:37.176 "ana_reporting": false 00:34:37.176 }, 00:34:37.176 "vs": { 00:34:37.176 "nvme_version": "1.3" 00:34:37.176 }, 00:34:37.176 "ns_data": { 00:34:37.176 "id": 1, 00:34:37.176 "can_share": true 00:34:37.176 } 00:34:37.176 } 00:34:37.176 ], 00:34:37.176 "mp_policy": "active_passive" 00:34:37.176 } 00:34:37.176 } 00:34:37.176 ] 00:34:37.176 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.176 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:34:37.176 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.176 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.176 [2024-06-11 14:02:30.054258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:37.438 [2024-06-11 14:02:30.084365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:34:37.438 [2024-06-11 14:02:30.110312] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 [ 00:34:37.438 { 00:34:37.438 "name": "nvme0n1", 00:34:37.438 "aliases": [ 00:34:37.438 "8e23bba8-3365-411a-9d59-6c75eb968453" 00:34:37.438 ], 00:34:37.438 "product_name": "NVMe disk", 00:34:37.438 "block_size": 512, 00:34:37.438 "num_blocks": 2097152, 00:34:37.438 "uuid": "8e23bba8-3365-411a-9d59-6c75eb968453", 00:34:37.438 "assigned_rate_limits": { 00:34:37.438 "rw_ios_per_sec": 0, 00:34:37.438 "rw_mbytes_per_sec": 0, 00:34:37.438 "r_mbytes_per_sec": 0, 00:34:37.438 "w_mbytes_per_sec": 0 00:34:37.438 }, 00:34:37.438 "claimed": false, 00:34:37.438 "zoned": false, 00:34:37.438 "supported_io_types": { 00:34:37.438 "read": true, 00:34:37.438 "write": true, 00:34:37.438 "unmap": false, 00:34:37.438 "write_zeroes": true, 00:34:37.438 "flush": true, 00:34:37.438 "reset": true, 00:34:37.438 "compare": true, 00:34:37.438 "compare_and_write": true, 00:34:37.438 "abort": true, 00:34:37.438 "nvme_admin": true, 00:34:37.438 "nvme_io": true 00:34:37.438 }, 00:34:37.438 "memory_domains": [ 00:34:37.438 { 00:34:37.438 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:34:37.438 "dma_device_type": 0 00:34:37.438 } 00:34:37.438 ], 00:34:37.438 "driver_specific": { 00:34:37.438 "nvme": [ 00:34:37.438 { 00:34:37.438 "trid": { 00:34:37.438 "trtype": "RDMA", 00:34:37.438 "adrfam": "IPv4", 00:34:37.438 "traddr": "192.168.100.8", 00:34:37.438 "trsvcid": "4420", 00:34:37.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:37.438 }, 00:34:37.438 "ctrlr_data": { 00:34:37.438 "cntlid": 2, 00:34:37.438 "vendor_id": "0x8086", 00:34:37.438 "model_number": "SPDK bdev Controller", 00:34:37.438 "serial_number": "00000000000000000000", 00:34:37.438 "firmware_revision": "24.09", 00:34:37.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.438 "oacs": { 00:34:37.438 "security": 0, 00:34:37.438 "format": 0, 00:34:37.438 "firmware": 0, 00:34:37.438 "ns_manage": 0 00:34:37.438 }, 00:34:37.438 "multi_ctrlr": true, 00:34:37.438 "ana_reporting": false 00:34:37.438 }, 00:34:37.438 "vs": { 00:34:37.438 "nvme_version": "1.3" 00:34:37.438 }, 00:34:37.438 "ns_data": { 00:34:37.438 "id": 1, 00:34:37.438 "can_share": true 00:34:37.438 } 00:34:37.438 } 00:34:37.438 ], 00:34:37.438 "mp_policy": "active_passive" 00:34:37.438 } 00:34:37.438 } 00:34:37.438 ] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HRp1X8F0tB 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:37.438 14:02:30 
nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HRp1X8F0tB 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 [2024-06-11 14:02:30.195102] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HRp1X8F0tB 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HRp1X8F0tB 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 [2024-06-11 14:02:30.215132] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:37.438 nvme0n1 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.438 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.438 [ 00:34:37.438 { 00:34:37.438 "name": "nvme0n1", 00:34:37.438 "aliases": [ 00:34:37.438 "8e23bba8-3365-411a-9d59-6c75eb968453" 00:34:37.438 ], 00:34:37.438 "product_name": "NVMe disk", 00:34:37.438 "block_size": 512, 00:34:37.438 "num_blocks": 2097152, 00:34:37.438 "uuid": "8e23bba8-3365-411a-9d59-6c75eb968453", 00:34:37.439 "assigned_rate_limits": { 00:34:37.439 "rw_ios_per_sec": 0, 00:34:37.439 "rw_mbytes_per_sec": 0, 00:34:37.439 "r_mbytes_per_sec": 0, 00:34:37.439 "w_mbytes_per_sec": 0 00:34:37.439 }, 00:34:37.439 "claimed": false, 00:34:37.439 "zoned": false, 00:34:37.439 "supported_io_types": { 00:34:37.439 "read": true, 00:34:37.439 "write": true, 00:34:37.439 "unmap": false, 00:34:37.439 "write_zeroes": true, 00:34:37.439 "flush": true, 00:34:37.439 "reset": true, 00:34:37.439 "compare": true, 00:34:37.439 "compare_and_write": true, 00:34:37.439 "abort": true, 
00:34:37.439 "nvme_admin": true, 00:34:37.439 "nvme_io": true 00:34:37.439 }, 00:34:37.439 "memory_domains": [ 00:34:37.439 { 00:34:37.439 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:34:37.439 "dma_device_type": 0 00:34:37.439 } 00:34:37.439 ], 00:34:37.439 "driver_specific": { 00:34:37.439 "nvme": [ 00:34:37.439 { 00:34:37.439 "trid": { 00:34:37.439 "trtype": "RDMA", 00:34:37.439 "adrfam": "IPv4", 00:34:37.439 "traddr": "192.168.100.8", 00:34:37.439 "trsvcid": "4421", 00:34:37.439 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:37.439 }, 00:34:37.439 "ctrlr_data": { 00:34:37.439 "cntlid": 3, 00:34:37.439 "vendor_id": "0x8086", 00:34:37.439 "model_number": "SPDK bdev Controller", 00:34:37.439 "serial_number": "00000000000000000000", 00:34:37.439 "firmware_revision": "24.09", 00:34:37.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.439 "oacs": { 00:34:37.439 "security": 0, 00:34:37.439 "format": 0, 00:34:37.439 "firmware": 0, 00:34:37.439 "ns_manage": 0 00:34:37.439 }, 00:34:37.439 "multi_ctrlr": true, 00:34:37.439 "ana_reporting": false 00:34:37.439 }, 00:34:37.439 "vs": { 00:34:37.439 "nvme_version": "1.3" 00:34:37.439 }, 00:34:37.439 "ns_data": { 00:34:37.439 "id": 1, 00:34:37.439 "can_share": true 00:34:37.439 } 00:34:37.439 } 00:34:37.439 ], 00:34:37.439 "mp_policy": "active_passive" 00:34:37.439 } 00:34:37.439 } 00:34:37.439 ] 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.HRp1X8F0tB 00:34:37.439 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:34:37.699 rmmod nvme_rdma 00:34:37.699 rmmod nvme_fabrics 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2315996 ']' 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2315996 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 2315996 ']' 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 2315996 00:34:37.699 14:02:30 
nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2315996 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2315996' 00:34:37.699 killing process with pid 2315996 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 2315996 00:34:37.699 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 2315996 00:34:37.961 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:37.961 14:02:30 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:34:37.961 00:34:37.961 real 0m9.119s 00:34:37.961 user 0m3.868s 00:34:37.961 sys 0m5.800s 00:34:37.961 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:37.961 14:02:30 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:34:37.961 ************************************ 00:34:37.961 END TEST nvmf_async_init 00:34:37.961 ************************************ 00:34:37.961 14:02:30 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:34:37.961 14:02:30 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:37.961 14:02:30 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:37.961 14:02:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:34:37.961 ************************************ 00:34:37.961 START TEST dma 00:34:37.961 ************************************ 00:34:37.961 14:02:30 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:34:37.961 * Looking for test storage... 
00:34:37.961 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:37.961 14:02:30 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.961 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:37.961 14:02:30 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.961 14:02:30 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.961 14:02:30 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.962 14:02:30 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.962 14:02:30 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.962 14:02:30 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.962 14:02:30 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:34:37.962 14:02:30 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:37.962 14:02:30 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:34:37.962 14:02:30 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:34:37.962 14:02:30 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:34:37.962 14:02:30 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:34:37.962 14:02:30 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.962 14:02:30 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:37.962 14:02:30 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:37.962 14:02:30 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:34:37.962 14:02:30 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:46.111 14:02:37 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:34:46.111 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:34:46.111 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.111 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:34:46.112 Found net devices under 0000:98:00.0: mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:34:46.112 Found net devices under 0000:98:00.1: mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:34:46.112 14:02:37 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:34:46.112 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:46.112 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:34:46.112 altname enp152s0f0np0 00:34:46.112 altname ens817f0np0 00:34:46.112 inet 192.168.100.8/24 scope global mlx_0_0 00:34:46.112 valid_lft forever preferred_lft forever 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.112 14:02:37 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:34:46.112 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:46.112 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:34:46.112 altname enp152s0f1np1 00:34:46.112 altname ens817f1np1 00:34:46.112 inet 192.168.100.9/24 scope global mlx_0_1 00:34:46.112 valid_lft forever preferred_lft forever 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
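The shell trace above is nvmf/common.sh discovering the test NICs: get_rdma_if_list pairs each mlx5 port with its netdev (mlx_0_0, mlx_0_1) and get_ip_address reads the IPv4 addresses, 192.168.100.8 and 192.168.100.9, that the RDMA listeners will use. A simplified, self-contained version of that lookup, assuming the usual sysfs layout for RDMA devices; this is a sketch of the idea, not the harness code itself:

    # Map every RDMA device to its netdev(s) and print the IPv4 address, if any
    for ibdev in /sys/class/infiniband/*; do
        for netdir in "$ibdev"/device/net/*; do
            [ -e "$netdir" ] || continue
            nic=$(basename "$netdir")
            addr=$(ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1)
            echo "$(basename "$ibdev") -> $nic ${addr:-<no IPv4 address>}"
        done
    done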
00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:34:46.112 192.168.100.9' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:34:46.112 192.168.100.9' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:34:46.112 192.168.100.9' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:34:46.112 14:02:37 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=2320060 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 2320060 00:34:46.112 14:02:37 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@830 -- # '[' -z 2320060 ']' 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:46.112 14:02:37 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 [2024-06-11 14:02:37.940054] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:46.113 [2024-06-11 14:02:37.940107] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.113 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.113 [2024-06-11 14:02:38.001467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.113 [2024-06-11 14:02:38.067208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:46.113 [2024-06-11 14:02:38.067242] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:46.113 [2024-06-11 14:02:38.067249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.113 [2024-06-11 14:02:38.067256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.113 [2024-06-11 14:02:38.067261] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.113 [2024-06-11 14:02:38.067404] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.113 [2024-06-11 14:02:38.067406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@863 -- # return 0 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 [2024-06-11 14:02:38.790235] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22a77b0/0x22abca0) succeed. 00:34:46.113 [2024-06-11 14:02:38.802426] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22a8cb0/0x22ed330) succeed. 
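With both mlx5 ports registered by the new nvmf_tgt instance (pid 2320060, cores 0-1), the dma test builds its target much like the async_init test did, but backs the namespace with a 256 MB malloc bdev instead of a null bdev. The RPCs issued next are equivalent to the following sketch (default RPC socket assumed):

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 256 512 -b Malloc0      # 256 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420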
00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 Malloc0 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:34:46.113 [2024-06-11 14:02:38.957203] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:34:46.113 14:02:38 nvmf_rdma.dma -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:34:46.113 14:02:38 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:46.113 { 00:34:46.113 "params": { 00:34:46.113 "name": "Nvme$subsystem", 00:34:46.113 "trtype": "$TEST_TRANSPORT", 00:34:46.113 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:46.113 "adrfam": "ipv4", 00:34:46.113 "trsvcid": "$NVMF_PORT", 00:34:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:46.113 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:46.113 "hdgst": ${hdgst:-false}, 00:34:46.113 "ddgst": ${ddgst:-false} 00:34:46.113 }, 00:34:46.113 "method": "bdev_nvme_attach_controller" 00:34:46.113 } 00:34:46.113 EOF 00:34:46.113 )") 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:34:46.113 14:02:38 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:46.113 "params": { 00:34:46.113 "name": "Nvme0", 00:34:46.113 "trtype": "rdma", 00:34:46.113 "traddr": "192.168.100.8", 00:34:46.113 "adrfam": "ipv4", 00:34:46.113 "trsvcid": "4420", 00:34:46.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:46.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:46.113 "hdgst": false, 00:34:46.113 "ddgst": false 00:34:46.113 }, 00:34:46.113 "method": "bdev_nvme_attach_controller" 00:34:46.113 }' 00:34:46.113 [2024-06-11 14:02:39.006145] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:34:46.113 [2024-06-11 14:02:39.006194] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320254 ] 00:34:46.373 EAL: No free 2048 kB hugepages reported on node 1 00:34:46.373 [2024-06-11 14:02:39.056788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.373 [2024-06-11 14:02:39.111917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:46.373 [2024-06-11 14:02:39.111917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:51.664 bdev Nvme0n1 reports 1 memory domains 00:34:51.664 bdev Nvme0n1 supports RDMA memory domain 00:34:51.664 Initialization complete, running randrw IO for 5 sec on 2 cores 00:34:51.664 ========================================================================== 00:34:51.664 Latency [us] 00:34:51.664 IOPS MiB/s Average min max 00:34:51.664 Core 2: 23772.14 92.86 672.57 279.67 8970.74 00:34:51.664 Core 3: 27023.23 105.56 591.49 201.77 8884.88 00:34:51.664 ========================================================================== 00:34:51.664 Total : 50795.36 198.42 629.43 201.77 8970.74 00:34:51.664 00:34:51.664 Total operations: 254001, translate 254001 pull_push 0 memzero 0 00:34:51.665 14:02:44 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:34:51.665 14:02:44 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:34:51.665 14:02:44 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:34:51.665 [2024-06-11 14:02:44.473545] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
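The first run above drives Nvme0n1, whose RDMA-capable controller reports a memory domain, so every operation completes by address translation (translate 254001, pull_push 0). The run launched here targets the plain Malloc0 bdev instead; it has no RDMA memory domain to translate into, so the DMA library falls back to pull/push data copies, which is why its summary below counts only pull_push operations. The two invocations differ only in the bdev and the -x mode; the config file names in this sketch are hypothetical stand-ins for the JSON the harness streams over /dev/fd/62:

    DMA=./test/dma/test_dma/test_dma
    # RDMA-capable NVMe bdev: requests are satisfied by memory-domain translation
    $DMA -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json ./nvme0.json -b Nvme0n1 -f -x translate
    # Malloc bdev: no RDMA memory domain, so data is pull/push copied instead
    $DMA -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json ./malloc0.json -b Malloc0 -x pull_push
    # ./nvme0.json and ./malloc0.json are placeholder configs; the harness generates them on the fly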
00:34:51.665 [2024-06-11 14:02:44.473604] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2321323 ] 00:34:51.665 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.665 [2024-06-11 14:02:44.524260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.925 [2024-06-11 14:02:44.576701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:51.925 [2024-06-11 14:02:44.576701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:34:57.213 bdev Malloc0 reports 2 memory domains 00:34:57.213 bdev Malloc0 doesn't support RDMA memory domain 00:34:57.213 Initialization complete, running randrw IO for 5 sec on 2 cores 00:34:57.213 ========================================================================== 00:34:57.213 Latency [us] 00:34:57.213 IOPS MiB/s Average min max 00:34:57.213 Core 2: 18883.90 73.77 846.74 310.74 1379.84 00:34:57.213 Core 3: 18898.70 73.82 846.07 312.07 1380.56 00:34:57.213 ========================================================================== 00:34:57.213 Total : 37782.61 147.59 846.40 310.74 1380.56 00:34:57.213 00:34:57.213 Total operations: 188966, translate 0 pull_push 755864 memzero 0 00:34:57.213 14:02:49 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:34:57.213 14:02:49 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:34:57.213 14:02:49 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:34:57.213 14:02:49 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:34:57.213 Ignoring -M option 00:34:57.213 [2024-06-11 14:02:49.828254] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:34:57.213 [2024-06-11 14:02:49.828310] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2322360 ] 00:34:57.213 EAL: No free 2048 kB hugepages reported on node 1 00:34:57.213 [2024-06-11 14:02:49.879540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:57.213 [2024-06-11 14:02:49.930691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.213 [2024-06-11 14:02:49.930691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.504 bdev 79c040b0-ea9b-4462-83fc-d5ba5af7a0d9 reports 1 memory domains 00:35:02.505 bdev 79c040b0-ea9b-4462-83fc-d5ba5af7a0d9 supports RDMA memory domain 00:35:02.505 Initialization complete, running randread IO for 5 sec on 2 cores 00:35:02.505 ========================================================================== 00:35:02.505 Latency [us] 00:35:02.505 IOPS MiB/s Average min max 00:35:02.505 Core 2: 116566.12 455.34 136.73 60.26 3737.21 00:35:02.505 Core 3: 121731.76 475.51 130.93 54.70 3841.17 00:35:02.505 ========================================================================== 00:35:02.505 Total : 238297.88 930.85 133.77 54.70 3841.17 00:35:02.505 00:35:02.505 Total operations: 1191574, translate 0 pull_push 0 memzero 1191574 00:35:02.505 14:02:55 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:35:02.505 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.505 [2024-06-11 14:02:55.405745] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:35:05.050 Initializing NVMe Controllers 00:35:05.050 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:35:05.050 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:05.050 Initialization complete. Launching workers. 00:35:05.050 ======================================================== 00:35:05.050 Latency(us) 00:35:05.050 Device Information : IOPS MiB/s Average min max 00:35:05.050 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7980.67 5980.02 9977.56 00:35:05.050 ======================================================== 00:35:05.050 Total : 2016.00 7.88 7980.67 5980.02 9977.56 00:35:05.050 00:35:05.050 14:02:57 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:35:05.050 14:02:57 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:35:05.050 14:02:57 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:35:05.050 14:02:57 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:35:05.050 [2024-06-11 14:02:57.792878] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:35:05.050 [2024-06-11 14:02:57.792931] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323762 ] 00:35:05.050 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.050 [2024-06-11 14:02:57.844175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:05.050 [2024-06-11 14:02:57.895765] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.050 [2024-06-11 14:02:57.895766] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:10.340 bdev 851bc330-4f18-4313-9d6d-5ccabb986b39 reports 1 memory domains 00:35:10.340 bdev 851bc330-4f18-4313-9d6d-5ccabb986b39 supports RDMA memory domain 00:35:10.340 Initialization complete, running randrw IO for 5 sec on 2 cores 00:35:10.340 ========================================================================== 00:35:10.340 Latency [us] 00:35:10.340 IOPS MiB/s Average min max 00:35:10.340 Core 2: 20999.07 82.03 761.42 11.35 14979.09 00:35:10.340 Core 3: 27124.75 105.96 589.35 7.49 14248.05 00:35:10.340 ========================================================================== 00:35:10.340 Total : 48123.82 187.98 664.43 7.49 14979.09 00:35:10.340 00:35:10.340 Total operations: 240655, translate 240513 pull_push 0 memzero 142 00:35:10.600 14:03:03 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:35:10.600 14:03:03 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:10.600 rmmod nvme_rdma 00:35:10.600 rmmod nvme_fabrics 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 2320060 ']' 00:35:10.600 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 2320060 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@949 -- # '[' -z 2320060 ']' 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # kill -0 2320060 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # uname 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2320060 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2320060' 00:35:10.600 killing process with pid 2320060 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@968 -- # kill 2320060 00:35:10.600 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@973 -- # 
wait 2320060 00:35:10.861 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.861 14:03:03 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:10.861 00:35:10.861 real 0m32.901s 00:35:10.861 user 1m35.329s 00:35:10.861 sys 0m6.124s 00:35:10.861 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:10.861 14:03:03 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:35:10.861 ************************************ 00:35:10.861 END TEST dma 00:35:10.861 ************************************ 00:35:10.861 14:03:03 nvmf_rdma -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:35:10.861 14:03:03 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:10.861 14:03:03 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:10.861 14:03:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:10.861 ************************************ 00:35:10.861 START TEST nvmf_identify 00:35:10.861 ************************************ 00:35:10.861 14:03:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:35:10.861 * Looking for test storage... 00:35:11.122 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:35:11.122 14:03:03 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:35:11.123 14:03:03 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify 
-- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:35:19.260 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:35:19.260 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:19.260 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:19.261 14:03:10 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:35:19.261 Found net devices under 0000:98:00.0: mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:35:19.261 Found net devices under 0000:98:00.1: mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:19.261 14:03:10 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:19.261 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:19.261 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:35:19.261 altname enp152s0f0np0 00:35:19.261 altname ens817f0np0 00:35:19.261 inet 192.168.100.8/24 scope global mlx_0_0 00:35:19.261 valid_lft forever preferred_lft forever 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:19.261 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:19.261 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:35:19.261 altname enp152s0f1np1 00:35:19.261 altname ens817f1np1 00:35:19.261 inet 192.168.100.9/24 scope global mlx_0_1 00:35:19.261 valid_lft forever preferred_lft forever 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:19.261 192.168.100.9' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:35:19.261 14:03:10 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:19.261 192.168.100.9' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:19.261 192.168.100.9' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:19.261 14:03:10 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2329063 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2329063 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 2329063 ']' 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:19.262 14:03:10 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 [2024-06-11 14:03:10.989084] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:35:19.262 [2024-06-11 14:03:10.989151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.262 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.262 [2024-06-11 14:03:11.057000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:19.262 [2024-06-11 14:03:11.134823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.262 [2024-06-11 14:03:11.134865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:19.262 [2024-06-11 14:03:11.134873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.262 [2024-06-11 14:03:11.134880] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.262 [2024-06-11 14:03:11.134886] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.262 [2024-06-11 14:03:11.135071] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.262 [2024-06-11 14:03:11.135142] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.262 [2024-06-11 14:03:11.135350] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.262 [2024-06-11 14:03:11.135350] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 [2024-06-11 14:03:11.815411] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xab9e90/0xabe380) succeed. 00:35:19.262 [2024-06-11 14:03:11.828434] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xabb4d0/0xaffa10) succeed. 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:11 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 Malloc0 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:19.262 
14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 [2024-06-11 14:03:12.040209] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.262 [ 00:35:19.262 { 00:35:19.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:19.262 "subtype": "Discovery", 00:35:19.262 "listen_addresses": [ 00:35:19.262 { 00:35:19.262 "trtype": "RDMA", 00:35:19.262 "adrfam": "IPv4", 00:35:19.262 "traddr": "192.168.100.8", 00:35:19.262 "trsvcid": "4420" 00:35:19.262 } 00:35:19.262 ], 00:35:19.262 "allow_any_host": true, 00:35:19.262 "hosts": [] 00:35:19.262 }, 00:35:19.262 { 00:35:19.262 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.262 "subtype": "NVMe", 00:35:19.262 "listen_addresses": [ 00:35:19.262 { 00:35:19.262 "trtype": "RDMA", 00:35:19.262 "adrfam": "IPv4", 00:35:19.262 "traddr": "192.168.100.8", 00:35:19.262 "trsvcid": "4420" 00:35:19.262 } 00:35:19.262 ], 00:35:19.262 "allow_any_host": true, 00:35:19.262 "hosts": [], 00:35:19.262 "serial_number": "SPDK00000000000001", 00:35:19.262 "model_number": "SPDK bdev Controller", 00:35:19.262 "max_namespaces": 32, 00:35:19.262 "min_cntlid": 1, 00:35:19.262 "max_cntlid": 65519, 00:35:19.262 "namespaces": [ 00:35:19.262 { 00:35:19.262 "nsid": 1, 00:35:19.262 "bdev_name": "Malloc0", 00:35:19.262 "name": "Malloc0", 00:35:19.262 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:35:19.262 "eui64": "ABCDEF0123456789", 00:35:19.262 "uuid": "f2314dae-25d2-4d32-8387-081c817481cd" 00:35:19.262 } 00:35:19.262 ] 00:35:19.262 } 00:35:19.262 ] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.262 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:35:19.262 [2024-06-11 14:03:12.100379] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:35:19.262 [2024-06-11 14:03:12.100421] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329388 ] 00:35:19.262 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.262 [2024-06-11 14:03:12.157618] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:35:19.262 [2024-06-11 14:03:12.157701] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:35:19.262 [2024-06-11 14:03:12.157718] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:35:19.262 [2024-06-11 14:03:12.157722] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:35:19.262 [2024-06-11 14:03:12.157751] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:35:19.538 [2024-06-11 14:03:12.170963] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:35:19.538 [2024-06-11 14:03:12.188546] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:35:19.538 [2024-06-11 14:03:12.188555] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:35:19.538 [2024-06-11 14:03:12.188563] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188569] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188574] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188579] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188584] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188589] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188594] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188599] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188604] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188609] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188614] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188618] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188623] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188628] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188633] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188638] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188643] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188649] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188654] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188659] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188664] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188669] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188674] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188679] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188684] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188688] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188693] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188698] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188703] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188711] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188717] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188721] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:35:19.538 [2024-06-11 14:03:12.188725] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:35:19.538 [2024-06-11 14:03:12.188729] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:35:19.538 [2024-06-11 14:03:12.188746] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.188758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183c00 00:35:19.538 [2024-06-11 14:03:12.195023] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:35:19.538 [2024-06-11 14:03:12.195040] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195050] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:35:19.538 [2024-06-11 14:03:12.195059] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:35:19.538 [2024-06-11 14:03:12.195066] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:35:19.538 [2024-06-11 14:03:12.195080] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.538 [2024-06-11 14:03:12.195110] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:35:19.538 [2024-06-11 14:03:12.195122] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:35:19.538 [2024-06-11 14:03:12.195126] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195132] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:35:19.538 [2024-06-11 14:03:12.195139] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.538 [2024-06-11 14:03:12.195166] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:35:19.538 [2024-06-11 14:03:12.195176] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:35:19.538 [2024-06-11 14:03:12.195181] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195187] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:35:19.538 [2024-06-11 14:03:12.195194] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.538 [2024-06-11 14:03:12.195224] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:19.538 [2024-06-11 14:03:12.195235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:35:19.538 [2024-06-11 14:03:12.195239] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195247] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.538 [2024-06-11 14:03:12.195272] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:19.538 [2024-06-11 14:03:12.195281] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:35:19.538 [2024-06-11 14:03:12.195286] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:35:19.538 [2024-06-11 14:03:12.195291] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:35:19.538 [2024-06-11 14:03:12.195401] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:35:19.538 [2024-06-11 14:03:12.195406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:35:19.538 [2024-06-11 14:03:12.195415] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.538 [2024-06-11 14:03:12.195445] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:19.538 [2024-06-11 14:03:12.195455] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:35:19.538 [2024-06-11 14:03:12.195459] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195467] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.538 [2024-06-11 14:03:12.195474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.538 [2024-06-11 14:03:12.195495] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.538 [2024-06-11 14:03:12.195500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.195505] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:35:19.539 [2024-06-11 14:03:12.195509] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:35:19.539 [2024-06-11 14:03:12.195514] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195522] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:35:19.539 [2024-06-11 14:03:12.195533] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:35:19.539 [2024-06-11 14:03:12.195542] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183c00 00:35:19.539 [2024-06-11 14:03:12.195591] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.195595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.195605] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:35:19.539 [2024-06-11 14:03:12.195610] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:35:19.539 [2024-06-11 14:03:12.195614] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:35:19.539 [2024-06-11 14:03:12.195619] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:35:19.539 [2024-06-11 14:03:12.195624] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:35:19.539 [2024-06-11 14:03:12.195628] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:35:19.539 [2024-06-11 14:03:12.195633] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195641] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:35:19.539 [2024-06-11 14:03:12.195651] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.539 [2024-06-11 14:03:12.195685] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.195690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.195698] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195703] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.539 [2024-06-11 14:03:12.195711] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.539 [2024-06-11 14:03:12.195724] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.539 [2024-06-11 14:03:12.195735] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.539 [2024-06-11 14:03:12.195746] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:35:19.539 [2024-06-11 14:03:12.195753] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195763] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:35:19.539 [2024-06-11 14:03:12.195769] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.539 [2024-06-11 14:03:12.195801] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.195806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.195813] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:35:19.539 [2024-06-11 14:03:12.195819] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:35:19.539 [2024-06-11 14:03:12.195824] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195834] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183c00 00:35:19.539 [2024-06-11 14:03:12.195871] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.195877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.195883] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195892] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:35:19.539 [2024-06-11 14:03:12.195914] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183c00 00:35:19.539 [2024-06-11 14:03:12.195928] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.539 [2024-06-11 14:03:12.195955] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.195960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.195971] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183c00 00:35:19.539 [2024-06-11 14:03:12.195982] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.195988] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.195994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.196000] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.196012] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.196020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.196029] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.196036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183c00 00:35:19.539 [2024-06-11 14:03:12.196043] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.539 [2024-06-11 14:03:12.196070] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.539 [2024-06-11 14:03:12.196075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:19.539 [2024-06-11 14:03:12.196084] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.539 ===================================================== 00:35:19.539 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:19.539 
===================================================== 00:35:19.539 Controller Capabilities/Features 00:35:19.539 ================================ 00:35:19.539 Vendor ID: 0000 00:35:19.539 Subsystem Vendor ID: 0000 00:35:19.539 Serial Number: .................... 00:35:19.539 Model Number: ........................................ 00:35:19.539 Firmware Version: 24.09 00:35:19.539 Recommended Arb Burst: 0 00:35:19.539 IEEE OUI Identifier: 00 00 00 00:35:19.539 Multi-path I/O 00:35:19.539 May have multiple subsystem ports: No 00:35:19.539 May have multiple controllers: No 00:35:19.539 Associated with SR-IOV VF: No 00:35:19.539 Max Data Transfer Size: 131072 00:35:19.539 Max Number of Namespaces: 0 00:35:19.539 Max Number of I/O Queues: 1024 00:35:19.539 NVMe Specification Version (VS): 1.3 00:35:19.539 NVMe Specification Version (Identify): 1.3 00:35:19.539 Maximum Queue Entries: 128 00:35:19.539 Contiguous Queues Required: Yes 00:35:19.539 Arbitration Mechanisms Supported 00:35:19.539 Weighted Round Robin: Not Supported 00:35:19.539 Vendor Specific: Not Supported 00:35:19.539 Reset Timeout: 15000 ms 00:35:19.539 Doorbell Stride: 4 bytes 00:35:19.539 NVM Subsystem Reset: Not Supported 00:35:19.539 Command Sets Supported 00:35:19.539 NVM Command Set: Supported 00:35:19.539 Boot Partition: Not Supported 00:35:19.539 Memory Page Size Minimum: 4096 bytes 00:35:19.539 Memory Page Size Maximum: 4096 bytes 00:35:19.539 Persistent Memory Region: Not Supported 00:35:19.539 Optional Asynchronous Events Supported 00:35:19.539 Namespace Attribute Notices: Not Supported 00:35:19.539 Firmware Activation Notices: Not Supported 00:35:19.539 ANA Change Notices: Not Supported 00:35:19.539 PLE Aggregate Log Change Notices: Not Supported 00:35:19.539 LBA Status Info Alert Notices: Not Supported 00:35:19.540 EGE Aggregate Log Change Notices: Not Supported 00:35:19.540 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.540 Zone Descriptor Change Notices: Not Supported 00:35:19.540 Discovery Log Change Notices: Supported 00:35:19.540 Controller Attributes 00:35:19.540 128-bit Host Identifier: Not Supported 00:35:19.540 Non-Operational Permissive Mode: Not Supported 00:35:19.540 NVM Sets: Not Supported 00:35:19.540 Read Recovery Levels: Not Supported 00:35:19.540 Endurance Groups: Not Supported 00:35:19.540 Predictable Latency Mode: Not Supported 00:35:19.540 Traffic Based Keep ALive: Not Supported 00:35:19.540 Namespace Granularity: Not Supported 00:35:19.540 SQ Associations: Not Supported 00:35:19.540 UUID List: Not Supported 00:35:19.540 Multi-Domain Subsystem: Not Supported 00:35:19.540 Fixed Capacity Management: Not Supported 00:35:19.540 Variable Capacity Management: Not Supported 00:35:19.540 Delete Endurance Group: Not Supported 00:35:19.540 Delete NVM Set: Not Supported 00:35:19.540 Extended LBA Formats Supported: Not Supported 00:35:19.540 Flexible Data Placement Supported: Not Supported 00:35:19.540 00:35:19.540 Controller Memory Buffer Support 00:35:19.540 ================================ 00:35:19.540 Supported: No 00:35:19.540 00:35:19.540 Persistent Memory Region Support 00:35:19.540 ================================ 00:35:19.540 Supported: No 00:35:19.540 00:35:19.540 Admin Command Set Attributes 00:35:19.540 ============================ 00:35:19.540 Security Send/Receive: Not Supported 00:35:19.540 Format NVM: Not Supported 00:35:19.540 Firmware Activate/Download: Not Supported 00:35:19.540 Namespace Management: Not Supported 00:35:19.540 Device Self-Test: Not Supported 00:35:19.540 
Directives: Not Supported 00:35:19.540 NVMe-MI: Not Supported 00:35:19.540 Virtualization Management: Not Supported 00:35:19.540 Doorbell Buffer Config: Not Supported 00:35:19.540 Get LBA Status Capability: Not Supported 00:35:19.540 Command & Feature Lockdown Capability: Not Supported 00:35:19.540 Abort Command Limit: 1 00:35:19.540 Async Event Request Limit: 4 00:35:19.540 Number of Firmware Slots: N/A 00:35:19.540 Firmware Slot 1 Read-Only: N/A 00:35:19.540 Firmware Activation Without Reset: N/A 00:35:19.540 Multiple Update Detection Support: N/A 00:35:19.540 Firmware Update Granularity: No Information Provided 00:35:19.540 Per-Namespace SMART Log: No 00:35:19.540 Asymmetric Namespace Access Log Page: Not Supported 00:35:19.540 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:19.540 Command Effects Log Page: Not Supported 00:35:19.540 Get Log Page Extended Data: Supported 00:35:19.540 Telemetry Log Pages: Not Supported 00:35:19.540 Persistent Event Log Pages: Not Supported 00:35:19.540 Supported Log Pages Log Page: May Support 00:35:19.540 Commands Supported & Effects Log Page: Not Supported 00:35:19.540 Feature Identifiers & Effects Log Page:May Support 00:35:19.540 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.540 Data Area 4 for Telemetry Log: Not Supported 00:35:19.540 Error Log Page Entries Supported: 128 00:35:19.540 Keep Alive: Not Supported 00:35:19.540 00:35:19.540 NVM Command Set Attributes 00:35:19.540 ========================== 00:35:19.540 Submission Queue Entry Size 00:35:19.540 Max: 1 00:35:19.540 Min: 1 00:35:19.540 Completion Queue Entry Size 00:35:19.540 Max: 1 00:35:19.540 Min: 1 00:35:19.540 Number of Namespaces: 0 00:35:19.540 Compare Command: Not Supported 00:35:19.540 Write Uncorrectable Command: Not Supported 00:35:19.540 Dataset Management Command: Not Supported 00:35:19.540 Write Zeroes Command: Not Supported 00:35:19.540 Set Features Save Field: Not Supported 00:35:19.540 Reservations: Not Supported 00:35:19.540 Timestamp: Not Supported 00:35:19.540 Copy: Not Supported 00:35:19.540 Volatile Write Cache: Not Present 00:35:19.540 Atomic Write Unit (Normal): 1 00:35:19.540 Atomic Write Unit (PFail): 1 00:35:19.540 Atomic Compare & Write Unit: 1 00:35:19.540 Fused Compare & Write: Supported 00:35:19.540 Scatter-Gather List 00:35:19.540 SGL Command Set: Supported 00:35:19.540 SGL Keyed: Supported 00:35:19.540 SGL Bit Bucket Descriptor: Not Supported 00:35:19.540 SGL Metadata Pointer: Not Supported 00:35:19.540 Oversized SGL: Not Supported 00:35:19.540 SGL Metadata Address: Not Supported 00:35:19.540 SGL Offset: Supported 00:35:19.540 Transport SGL Data Block: Not Supported 00:35:19.540 Replay Protected Memory Block: Not Supported 00:35:19.540 00:35:19.540 Firmware Slot Information 00:35:19.540 ========================= 00:35:19.540 Active slot: 0 00:35:19.540 00:35:19.540 00:35:19.540 Error Log 00:35:19.540 ========= 00:35:19.540 00:35:19.540 Active Namespaces 00:35:19.540 ================= 00:35:19.540 Discovery Log Page 00:35:19.540 ================== 00:35:19.540 Generation Counter: 2 00:35:19.540 Number of Records: 2 00:35:19.540 Record Format: 0 00:35:19.540 00:35:19.540 Discovery Log Entry 0 00:35:19.540 ---------------------- 00:35:19.540 Transport Type: 1 (RDMA) 00:35:19.540 Address Family: 1 (IPv4) 00:35:19.540 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:19.540 Entry Flags: 00:35:19.540 Duplicate Returned Information: 1 00:35:19.540 Explicit Persistent Connection Support for Discovery: 1 00:35:19.540 Transport Requirements: 
00:35:19.540 Secure Channel: Not Required 00:35:19.540 Port ID: 0 (0x0000) 00:35:19.540 Controller ID: 65535 (0xffff) 00:35:19.540 Admin Max SQ Size: 128 00:35:19.540 Transport Service Identifier: 4420 00:35:19.540 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:19.540 Transport Address: 192.168.100.8 00:35:19.540 Transport Specific Address Subtype - RDMA 00:35:19.540 RDMA QP Service Type: 1 (Reliable Connected) 00:35:19.540 RDMA Provider Type: 1 (No provider specified) 00:35:19.540 RDMA CM Service: 1 (RDMA_CM) 00:35:19.540 Discovery Log Entry 1 00:35:19.540 ---------------------- 00:35:19.540 Transport Type: 1 (RDMA) 00:35:19.540 Address Family: 1 (IPv4) 00:35:19.540 Subsystem Type: 2 (NVM Subsystem) 00:35:19.540 Entry Flags: 00:35:19.540 Duplicate Returned Information: 0 00:35:19.540 Explicit Persistent Connection Support for Discovery: 0 00:35:19.540 Transport Requirements: 00:35:19.540 Secure Channel: Not Required 00:35:19.540 Port ID: 0 (0x0000) 00:35:19.540 Controller ID: 65535 (0xffff) 00:35:19.540 Admin Max SQ Size: [2024-06-11 14:03:12.196161] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:35:19.540 [2024-06-11 14:03:12.196170] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14600 doesn't match qid 00:35:19.540 [2024-06-11 14:03:12.196184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32610 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.540 [2024-06-11 14:03:12.196189] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14600 doesn't match qid 00:35:19.540 [2024-06-11 14:03:12.196196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32610 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.540 [2024-06-11 14:03:12.196201] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14600 doesn't match qid 00:35:19.540 [2024-06-11 14:03:12.196207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32610 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.540 [2024-06-11 14:03:12.196212] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 14600 doesn't match qid 00:35:19.540 [2024-06-11 14:03:12.196219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32610 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.540 [2024-06-11 14:03:12.196226] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183c00 00:35:19.540 [2024-06-11 14:03:12.196233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.540 [2024-06-11 14:03:12.196253] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.540 [2024-06-11 14:03:12.196258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:35:19.540 [2024-06-11 14:03:12.196265] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.540 [2024-06-11 14:03:12.196272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.540 [2024-06-11 14:03:12.196277] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.540 [2024-06-11 
14:03:12.196296] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.540 [2024-06-11 14:03:12.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:19.540 [2024-06-11 14:03:12.196306] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:35:19.540 [2024-06-11 14:03:12.196310] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:35:19.540 [2024-06-11 14:03:12.196315] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.540 [2024-06-11 14:03:12.196323] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.540 [2024-06-11 14:03:12.196331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196350] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196360] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196369] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196395] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196407] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196416] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196443] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196452] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196461] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196491] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196501] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196510] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196536] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196546] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196554] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196580] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196590] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196600] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196629] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196639] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196648] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196679] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196689] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196697] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196722] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196732] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196740] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196766] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196776] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196784] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196813] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196822] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196831] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196864] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196873] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196884] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196911] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196920] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196928] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.196953] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.196958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.196963] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196972] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.196979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.197002] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.197007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.197013] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197026] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.197056] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.197060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.197066] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197074] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.197105] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.197109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.197115] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197123] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.197160] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.541 [2024-06-11 14:03:12.197164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:35:19.541 [2024-06-11 14:03:12.197171] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197179] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.541 [2024-06-11 14:03:12.197186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.541 [2024-06-11 14:03:12.197206] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197216] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197224] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197249] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197258] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197267] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197296] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197305] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197313] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197338] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197348] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197356] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197385] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197395] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197403] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197438] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197448] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197457] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197484] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197493] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197502] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197529] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197539] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197548] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197575] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197585] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197593] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197627] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197637] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197646] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197671] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197683] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197692] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197717] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197728] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197736] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197765] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197776] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197785] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197812] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197823] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197832] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.542 [2024-06-11 14:03:12.197861] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.542 [2024-06-11 14:03:12.197865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:35:19.542 [2024-06-11 14:03:12.197871] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.542 [2024-06-11 14:03:12.197879] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.197887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.197907] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.197912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.197917] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.197925] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.197933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.197954] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.197958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.197964] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.197973] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.197980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198005] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198021] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198029] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198062] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198072] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198080] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198105] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198115] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198124] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198149] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198158] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198166] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198195] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198205] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198213] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198240] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198250] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198258] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198286] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198296] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198304] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198331] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198341] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198349] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198380] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198390] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198398] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198429] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198438] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198447] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198475] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198485] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198493] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198520] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198530] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198538] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198564] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198574] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198582] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198611] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198621] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198629] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198658] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198668] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198676] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:35:19.543 [2024-06-11 14:03:12.198703] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.543 [2024-06-11 14:03:12.198707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:35:19.543 [2024-06-11 14:03:12.198713] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.543 [2024-06-11 14:03:12.198721] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.198750] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.198754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.198759] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198768] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.198796] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.198801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.198806] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198814] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.198844] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.198849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.198854] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198862] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.198889] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.198893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.198898] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198907] 
nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.198935] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.198940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.198945] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198953] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.198960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.198984] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.198989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.198994] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.199002] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.199009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.203023] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.203030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.203035] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.203044] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.203051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.544 [2024-06-11 14:03:12.203069] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.544 [2024-06-11 14:03:12.203074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:35:19.544 [2024-06-11 14:03:12.203079] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.203085] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:35:19.544 128 00:35:19.544 Transport Service Identifier: 4420 00:35:19.544 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:35:19.544 Transport Address: 192.168.100.8 00:35:19.544 Transport Specific Address Subtype - RDMA 00:35:19.544 RDMA QP Service Type: 1 (Reliable Connected) 00:35:19.544 RDMA Provider Type: 1 (No provider specified) 00:35:19.544 RDMA CM Service: 1 
(RDMA_CM) 00:35:19.544 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:35:19.544 [2024-06-11 14:03:12.289272] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:35:19.544 [2024-06-11 14:03:12.289313] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329393 ] 00:35:19.544 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.544 [2024-06-11 14:03:12.344649] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:35:19.544 [2024-06-11 14:03:12.344727] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:35:19.544 [2024-06-11 14:03:12.344743] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:35:19.544 [2024-06-11 14:03:12.344747] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:35:19.544 [2024-06-11 14:03:12.344771] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:35:19.544 [2024-06-11 14:03:12.365959] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:35:19.544 [2024-06-11 14:03:12.383623] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:35:19.544 [2024-06-11 14:03:12.383632] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:35:19.544 [2024-06-11 14:03:12.383639] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383645] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383650] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383655] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383660] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383665] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383670] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383675] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383680] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383685] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383690] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383695] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 
00:35:19.544 [2024-06-11 14:03:12.383700] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383708] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383713] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383718] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383723] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383728] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383733] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383738] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383742] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383747] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383752] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383757] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383762] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383767] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383772] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383777] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383782] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383787] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383792] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183c00 00:35:19.544 [2024-06-11 14:03:12.383796] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:35:19.544 [2024-06-11 14:03:12.383800] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:35:19.544 [2024-06-11 14:03:12.383804] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:35:19.544 [2024-06-11 14:03:12.383818] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.383830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x183c00 00:35:19.545 [2024-06-11 14:03:12.390023] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 
[2024-06-11 14:03:12.390031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390037] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390043] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:35:19.545 [2024-06-11 14:03:12.390049] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:35:19.545 [2024-06-11 14:03:12.390054] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:35:19.545 [2024-06-11 14:03:12.390064] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390088] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:35:19.545 [2024-06-11 14:03:12.390103] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390109] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:35:19.545 [2024-06-11 14:03:12.390116] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390139] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390149] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:35:19.545 [2024-06-11 14:03:12.390153] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390159] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:35:19.545 [2024-06-11 14:03:12.390166] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390187] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:35:19.545 [2024-06-11 14:03:12.390197] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:35:19.545 [2024-06-11 14:03:12.390202] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390210] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390235] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390244] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:35:19.545 [2024-06-11 14:03:12.390249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:35:19.545 [2024-06-11 14:03:12.390253] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:35:19.545 [2024-06-11 14:03:12.390365] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:35:19.545 [2024-06-11 14:03:12.390368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:35:19.545 [2024-06-11 14:03:12.390380] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390403] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390413] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:35:19.545 [2024-06-11 14:03:12.390418] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390425] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390449] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 
14:03:12.390458] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:35:19.545 [2024-06-11 14:03:12.390463] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390467] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390473] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:35:19.545 [2024-06-11 14:03:12.390484] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390492] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183c00 00:35:19.545 [2024-06-11 14:03:12.390531] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390543] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:35:19.545 [2024-06-11 14:03:12.390547] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:35:19.545 [2024-06-11 14:03:12.390551] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:35:19.545 [2024-06-11 14:03:12.390556] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:35:19.545 [2024-06-11 14:03:12.390560] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:35:19.545 [2024-06-11 14:03:12.390565] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390569] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390578] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390586] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390613] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390625] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x183c00 
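The *DEBUG* trace above is the admin-queue bring-up state machine: connect adminq, read VS and CAP, check CC.EN, wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, then Identify Controller and AER configuration. When only those transitions matter, the same invocation can be filtered down; a sketch, reusing the command shown earlier and assuming the target is still reachable:

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all 2>&1 | grep 'setting state to'   # print only the controller-init state transitions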
00:35:19.545 [2024-06-11 14:03:12.390631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.545 [2024-06-11 14:03:12.390638] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.545 [2024-06-11 14:03:12.390649] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.545 [2024-06-11 14:03:12.390661] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.545 [2024-06-11 14:03:12.390672] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390676] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390685] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390692] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.545 [2024-06-11 14:03:12.390699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.545 [2024-06-11 14:03:12.390715] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.545 [2024-06-11 14:03:12.390720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:35:19.545 [2024-06-11 14:03:12.390725] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:35:19.545 [2024-06-11 14:03:12.390730] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:35:19.545 [2024-06-11 14:03:12.390735] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390748] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390755] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390761] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.546 
[2024-06-11 14:03:12.390780] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.390785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.390837] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390842] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390849] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390857] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.390884] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.390889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.390898] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:35:19.546 [2024-06-11 14:03:12.390907] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390912] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390919] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390926] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.390959] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.390964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.390975] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390980] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.390987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.390994] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 
cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.391025] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391037] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.391042] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391048] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.391055] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.391063] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.391068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.391073] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:35:19.546 [2024-06-11 14:03:12.391077] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:35:19.546 [2024-06-11 14:03:12.391082] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:35:19.546 [2024-06-11 14:03:12.391098] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.546 [2024-06-11 14:03:12.391112] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:35:19.546 [2024-06-11 14:03:12.391127] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391137] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391142] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391152] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391159] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391166] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.546 [2024-06-11 14:03:12.391182] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391191] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391199] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.546 [2024-06-11 14:03:12.391218] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391228] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391236] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.546 [2024-06-11 14:03:12.391256] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391267] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391277] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.391291] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.391305] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.391319] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0c40 length 0x40 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183c00 00:35:19.546 [2024-06-11 14:03:12.391333] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391348] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391353] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391365] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391370] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391382] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.546 [2024-06-11 14:03:12.391387] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.546 [2024-06-11 14:03:12.391391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:19.546 [2024-06-11 14:03:12.391399] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183c00 00:35:19.546 ===================================================== 00:35:19.546 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:19.546 ===================================================== 00:35:19.546 Controller Capabilities/Features 00:35:19.546 ================================ 00:35:19.546 Vendor ID: 8086 00:35:19.546 Subsystem Vendor ID: 8086 00:35:19.546 Serial Number: SPDK00000000000001 00:35:19.546 Model Number: SPDK bdev Controller 00:35:19.547 Firmware Version: 24.09 00:35:19.547 Recommended Arb Burst: 6 00:35:19.547 IEEE OUI Identifier: e4 d2 5c 00:35:19.547 Multi-path I/O 00:35:19.547 May have multiple subsystem ports: Yes 00:35:19.547 May have multiple controllers: Yes 00:35:19.547 Associated with SR-IOV VF: No 00:35:19.547 Max Data Transfer Size: 131072 00:35:19.547 Max Number of Namespaces: 32 00:35:19.547 Max Number of I/O Queues: 127 00:35:19.547 NVMe Specification Version (VS): 1.3 00:35:19.547 NVMe Specification Version (Identify): 1.3 00:35:19.547 Maximum Queue Entries: 128 00:35:19.547 Contiguous Queues Required: Yes 00:35:19.547 Arbitration Mechanisms Supported 00:35:19.547 Weighted Round Robin: Not Supported 00:35:19.547 Vendor Specific: Not Supported 00:35:19.547 Reset Timeout: 15000 ms 00:35:19.547 Doorbell Stride: 4 bytes 00:35:19.547 NVM Subsystem Reset: Not Supported 00:35:19.547 Command Sets Supported 00:35:19.547 NVM Command Set: Supported 00:35:19.547 Boot Partition: Not Supported 00:35:19.547 Memory Page Size Minimum: 4096 bytes 00:35:19.547 Memory Page Size Maximum: 4096 bytes 
00:35:19.547 Persistent Memory Region: Not Supported 00:35:19.547 Optional Asynchronous Events Supported 00:35:19.547 Namespace Attribute Notices: Supported 00:35:19.547 Firmware Activation Notices: Not Supported 00:35:19.547 ANA Change Notices: Not Supported 00:35:19.547 PLE Aggregate Log Change Notices: Not Supported 00:35:19.547 LBA Status Info Alert Notices: Not Supported 00:35:19.547 EGE Aggregate Log Change Notices: Not Supported 00:35:19.547 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.547 Zone Descriptor Change Notices: Not Supported 00:35:19.547 Discovery Log Change Notices: Not Supported 00:35:19.547 Controller Attributes 00:35:19.547 128-bit Host Identifier: Supported 00:35:19.547 Non-Operational Permissive Mode: Not Supported 00:35:19.547 NVM Sets: Not Supported 00:35:19.547 Read Recovery Levels: Not Supported 00:35:19.547 Endurance Groups: Not Supported 00:35:19.547 Predictable Latency Mode: Not Supported 00:35:19.547 Traffic Based Keep ALive: Not Supported 00:35:19.547 Namespace Granularity: Not Supported 00:35:19.547 SQ Associations: Not Supported 00:35:19.547 UUID List: Not Supported 00:35:19.547 Multi-Domain Subsystem: Not Supported 00:35:19.547 Fixed Capacity Management: Not Supported 00:35:19.547 Variable Capacity Management: Not Supported 00:35:19.547 Delete Endurance Group: Not Supported 00:35:19.547 Delete NVM Set: Not Supported 00:35:19.547 Extended LBA Formats Supported: Not Supported 00:35:19.547 Flexible Data Placement Supported: Not Supported 00:35:19.547 00:35:19.547 Controller Memory Buffer Support 00:35:19.547 ================================ 00:35:19.547 Supported: No 00:35:19.547 00:35:19.547 Persistent Memory Region Support 00:35:19.547 ================================ 00:35:19.547 Supported: No 00:35:19.547 00:35:19.547 Admin Command Set Attributes 00:35:19.547 ============================ 00:35:19.547 Security Send/Receive: Not Supported 00:35:19.547 Format NVM: Not Supported 00:35:19.547 Firmware Activate/Download: Not Supported 00:35:19.547 Namespace Management: Not Supported 00:35:19.547 Device Self-Test: Not Supported 00:35:19.547 Directives: Not Supported 00:35:19.547 NVMe-MI: Not Supported 00:35:19.547 Virtualization Management: Not Supported 00:35:19.547 Doorbell Buffer Config: Not Supported 00:35:19.547 Get LBA Status Capability: Not Supported 00:35:19.547 Command & Feature Lockdown Capability: Not Supported 00:35:19.547 Abort Command Limit: 4 00:35:19.547 Async Event Request Limit: 4 00:35:19.547 Number of Firmware Slots: N/A 00:35:19.547 Firmware Slot 1 Read-Only: N/A 00:35:19.547 Firmware Activation Without Reset: N/A 00:35:19.547 Multiple Update Detection Support: N/A 00:35:19.547 Firmware Update Granularity: No Information Provided 00:35:19.547 Per-Namespace SMART Log: No 00:35:19.547 Asymmetric Namespace Access Log Page: Not Supported 00:35:19.547 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:35:19.547 Command Effects Log Page: Supported 00:35:19.547 Get Log Page Extended Data: Supported 00:35:19.547 Telemetry Log Pages: Not Supported 00:35:19.547 Persistent Event Log Pages: Not Supported 00:35:19.547 Supported Log Pages Log Page: May Support 00:35:19.547 Commands Supported & Effects Log Page: Not Supported 00:35:19.547 Feature Identifiers & Effects Log Page:May Support 00:35:19.547 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.547 Data Area 4 for Telemetry Log: Not Supported 00:35:19.547 Error Log Page Entries Supported: 128 00:35:19.547 Keep Alive: Supported 00:35:19.547 Keep Alive Granularity: 10000 ms 
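The attribute report here comes from the SPDK initiator side. As a cross-check outside this job's script (a sketch, assuming nvme-cli and the kernel nvme_rdma module are available on the host), the same subsystem can be discovered and identified with the kernel initiator:

  sudo modprobe nvme_rdma
  sudo nvme discover -t rdma -a 192.168.100.8 -s 4420
  sudo nvme connect  -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme list                      # device name below is an assumption; confirm it here
  sudo nvme id-ctrl /dev/nvme0        # compare e.g. MDTS and log page support with the report above
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1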
00:35:19.547 00:35:19.547 NVM Command Set Attributes 00:35:19.547 ========================== 00:35:19.547 Submission Queue Entry Size 00:35:19.547 Max: 64 00:35:19.547 Min: 64 00:35:19.547 Completion Queue Entry Size 00:35:19.547 Max: 16 00:35:19.547 Min: 16 00:35:19.547 Number of Namespaces: 32 00:35:19.547 Compare Command: Supported 00:35:19.547 Write Uncorrectable Command: Not Supported 00:35:19.547 Dataset Management Command: Supported 00:35:19.547 Write Zeroes Command: Supported 00:35:19.547 Set Features Save Field: Not Supported 00:35:19.547 Reservations: Supported 00:35:19.547 Timestamp: Not Supported 00:35:19.547 Copy: Supported 00:35:19.547 Volatile Write Cache: Present 00:35:19.547 Atomic Write Unit (Normal): 1 00:35:19.547 Atomic Write Unit (PFail): 1 00:35:19.547 Atomic Compare & Write Unit: 1 00:35:19.547 Fused Compare & Write: Supported 00:35:19.547 Scatter-Gather List 00:35:19.547 SGL Command Set: Supported 00:35:19.547 SGL Keyed: Supported 00:35:19.547 SGL Bit Bucket Descriptor: Not Supported 00:35:19.547 SGL Metadata Pointer: Not Supported 00:35:19.547 Oversized SGL: Not Supported 00:35:19.547 SGL Metadata Address: Not Supported 00:35:19.547 SGL Offset: Supported 00:35:19.547 Transport SGL Data Block: Not Supported 00:35:19.547 Replay Protected Memory Block: Not Supported 00:35:19.547 00:35:19.547 Firmware Slot Information 00:35:19.547 ========================= 00:35:19.547 Active slot: 1 00:35:19.547 Slot 1 Firmware Revision: 24.09 00:35:19.547 00:35:19.547 00:35:19.547 Commands Supported and Effects 00:35:19.547 ============================== 00:35:19.547 Admin Commands 00:35:19.547 -------------- 00:35:19.547 Get Log Page (02h): Supported 00:35:19.547 Identify (06h): Supported 00:35:19.547 Abort (08h): Supported 00:35:19.547 Set Features (09h): Supported 00:35:19.547 Get Features (0Ah): Supported 00:35:19.547 Asynchronous Event Request (0Ch): Supported 00:35:19.547 Keep Alive (18h): Supported 00:35:19.547 I/O Commands 00:35:19.547 ------------ 00:35:19.547 Flush (00h): Supported LBA-Change 00:35:19.547 Write (01h): Supported LBA-Change 00:35:19.547 Read (02h): Supported 00:35:19.547 Compare (05h): Supported 00:35:19.547 Write Zeroes (08h): Supported LBA-Change 00:35:19.547 Dataset Management (09h): Supported LBA-Change 00:35:19.547 Copy (19h): Supported LBA-Change 00:35:19.547 Unknown (79h): Supported LBA-Change 00:35:19.547 Unknown (7Ah): Supported 00:35:19.547 00:35:19.547 Error Log 00:35:19.547 ========= 00:35:19.547 00:35:19.547 Arbitration 00:35:19.547 =========== 00:35:19.547 Arbitration Burst: 1 00:35:19.547 00:35:19.547 Power Management 00:35:19.547 ================ 00:35:19.547 Number of Power States: 1 00:35:19.547 Current Power State: Power State #0 00:35:19.547 Power State #0: 00:35:19.547 Max Power: 0.00 W 00:35:19.547 Non-Operational State: Operational 00:35:19.547 Entry Latency: Not Reported 00:35:19.547 Exit Latency: Not Reported 00:35:19.547 Relative Read Throughput: 0 00:35:19.547 Relative Read Latency: 0 00:35:19.547 Relative Write Throughput: 0 00:35:19.548 Relative Write Latency: 0 00:35:19.548 Idle Power: Not Reported 00:35:19.548 Active Power: Not Reported 00:35:19.548 Non-Operational Permissive Mode: Not Supported 00:35:19.548 00:35:19.548 Health Information 00:35:19.548 ================== 00:35:19.548 Critical Warnings: 00:35:19.548 Available Spare Space: OK 00:35:19.548 Temperature: OK 00:35:19.548 Device Reliability: OK 00:35:19.548 Read Only: No 00:35:19.548 Volatile Memory Backup: OK 00:35:19.548 Current Temperature: 0 Kelvin (-273 Celsius) 
00:35:19.548 Temperature Threshold: [2024-06-11 14:03:12.391490] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391514] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391524] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391550] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:35:19.548 [2024-06-11 14:03:12.391559] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51625 doesn't match qid 00:35:19.548 [2024-06-11 14:03:12.391572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391578] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51625 doesn't match qid 00:35:19.548 [2024-06-11 14:03:12.391584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391589] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51625 doesn't match qid 00:35:19.548 [2024-06-11 14:03:12.391595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391601] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 51625 doesn't match qid 00:35:19.548 [2024-06-11 14:03:12.391607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32556 cdw0:5 sqhd:3030 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391614] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391644] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391656] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391668] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391681] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 
14:03:12.391691] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:35:19.548 [2024-06-11 14:03:12.391695] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:35:19.548 [2024-06-11 14:03:12.391700] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391708] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391733] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391743] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391752] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391771] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391781] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391792] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391814] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391824] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391832] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391858] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391868] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391899] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391909] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391918] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391944] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391954] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391963] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.391970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.391988] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.391993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.391998] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392006] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.392031] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.392036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.392043] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392051] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.392075] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.392079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 
dnr:0 00:35:19.548 [2024-06-11 14:03:12.392084] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392093] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.392116] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.392120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.392126] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392134] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.392155] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.392160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:35:19.548 [2024-06-11 14:03:12.392165] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392173] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.548 [2024-06-11 14:03:12.392180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.548 [2024-06-11 14:03:12.392200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.548 [2024-06-11 14:03:12.392205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392210] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392218] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392243] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392253] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392261] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 
14:03:12.392283] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392294] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392302] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392323] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392333] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392341] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392365] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392374] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392382] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392406] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392415] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392423] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392452] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392462] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392470] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392492] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392501] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392510] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392531] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392542] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392550] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392571] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392581] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392589] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392610] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392620] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392628] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392653] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:35:19.549 
[2024-06-11 14:03:12.392663] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392671] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392691] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392700] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392708] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392730] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392739] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392748] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392769] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392780] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392788] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392808] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392817] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392825] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392845] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392855] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392863] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392888] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:35:19.549 [2024-06-11 14:03:12.392898] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392906] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.549 [2024-06-11 14:03:12.392913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.549 [2024-06-11 14:03:12.392927] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.549 [2024-06-11 14:03:12.392931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.392937] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.392945] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.392952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.392964] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.392969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.392974] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.392982] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.392989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393005] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393014] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393026] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393047] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393057] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393065] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393085] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393095] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393103] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393124] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393134] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393142] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393163] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393173] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393181] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393200] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 
14:03:12.393210] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393218] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393241] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393250] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393259] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393280] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393290] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393298] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393321] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393330] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393339] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393364] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393373] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393382] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393405] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393414] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393423] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393442] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393452] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393460] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393484] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393494] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393502] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393521] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393531] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393539] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393561] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393570] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393578] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393600] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393609] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393618] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.550 [2024-06-11 14:03:12.393637] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.550 [2024-06-11 14:03:12.393641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:35:19.550 [2024-06-11 14:03:12.393646] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x183c00 00:35:19.550 [2024-06-11 14:03:12.393655] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393676] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393686] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393694] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393718] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393728] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393736] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393763] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 
14:03:12.393773] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393781] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393800] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393810] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393818] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393839] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393849] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393857] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393878] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393888] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393896] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393916] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393925] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393935] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393960] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.393964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.393970] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393978] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.393984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.393999] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.394003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.394009] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.398020] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.398029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:35:19.551 [2024-06-11 14:03:12.398042] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:35:19.551 [2024-06-11 14:03:12.398047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0014 p:0 m:0 dnr:0 00:35:19.551 [2024-06-11 14:03:12.398052] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x183c00 00:35:19.551 [2024-06-11 14:03:12.398058] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:35:19.837 0 Kelvin (-273 Celsius) 00:35:19.837 Available Spare: 0% 00:35:19.837 Available Spare Threshold: 0% 00:35:19.837 Life Percentage Used: 0% 00:35:19.837 Data Units Read: 0 00:35:19.837 Data Units Written: 0 00:35:19.837 Host Read Commands: 0 00:35:19.837 Host Write Commands: 0 00:35:19.837 Controller Busy Time: 0 minutes 00:35:19.837 Power Cycles: 0 00:35:19.837 Power On Hours: 0 hours 00:35:19.837 Unsafe Shutdowns: 0 00:35:19.837 Unrecoverable Media Errors: 0 00:35:19.837 Lifetime Error Log Entries: 0 00:35:19.837 Warning Temperature Time: 0 minutes 00:35:19.837 Critical Temperature Time: 0 minutes 00:35:19.837 00:35:19.837 Number of Queues 00:35:19.837 ================ 00:35:19.837 Number of I/O Submission Queues: 127 00:35:19.837 Number of I/O Completion Queues: 127 00:35:19.837 00:35:19.837 Active Namespaces 00:35:19.837 ================= 00:35:19.837 Namespace ID:1 00:35:19.837 Error Recovery Timeout: Unlimited 00:35:19.837 Command Set Identifier: NVM (00h) 00:35:19.837 Deallocate: Supported 00:35:19.837 Deallocated/Unwritten Error: Not Supported 00:35:19.837 Deallocated Read Value: Unknown 00:35:19.837 Deallocate in Write Zeroes: Not Supported 00:35:19.837 Deallocated Guard Field: 0xFFFF 00:35:19.837 Flush: Supported 00:35:19.837 Reservation: Supported 00:35:19.837 Namespace Sharing Capabilities: Multiple Controllers 00:35:19.837 Size (in LBAs): 131072 (0GiB) 00:35:19.837 Capacity (in LBAs): 131072 (0GiB) 00:35:19.837 Utilization (in LBAs): 131072 
(0GiB) 00:35:19.837 NGUID: ABCDEF0123456789ABCDEF0123456789 00:35:19.837 EUI64: ABCDEF0123456789 00:35:19.837 UUID: f2314dae-25d2-4d32-8387-081c817481cd 00:35:19.837 Thin Provisioning: Not Supported 00:35:19.837 Per-NS Atomic Units: Yes 00:35:19.837 Atomic Boundary Size (Normal): 0 00:35:19.837 Atomic Boundary Size (PFail): 0 00:35:19.837 Atomic Boundary Offset: 0 00:35:19.837 Maximum Single Source Range Length: 65535 00:35:19.837 Maximum Copy Length: 65535 00:35:19.837 Maximum Source Range Count: 1 00:35:19.837 NGUID/EUI64 Never Reused: No 00:35:19.837 Namespace Write Protected: No 00:35:19.837 Number of LBA Formats: 1 00:35:19.837 Current LBA Format: LBA Format #00 00:35:19.837 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:19.837 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:35:19.837 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:35:19.838 rmmod nvme_rdma 00:35:19.838 rmmod nvme_fabrics 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2329063 ']' 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2329063 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 2329063 ']' 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 2329063 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2329063 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2329063' 00:35:19.838 killing process with pid 2329063 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@968 -- # kill 
2329063 00:35:19.838 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@973 -- # wait 2329063 00:35:20.100 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:20.100 14:03:12 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:35:20.100 00:35:20.100 real 0m9.123s 00:35:20.100 user 0m8.667s 00:35:20.100 sys 0m5.725s 00:35:20.100 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:20.100 14:03:12 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:35:20.100 ************************************ 00:35:20.100 END TEST nvmf_identify 00:35:20.100 ************************************ 00:35:20.100 14:03:12 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:35:20.100 14:03:12 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:20.100 14:03:12 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:20.100 14:03:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:35:20.100 ************************************ 00:35:20.100 START TEST nvmf_perf 00:35:20.100 ************************************ 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:35:20.100 * Looking for test storage... 00:35:20.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:20.100 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:20.101 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:20.101 14:03:12 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:20.101 
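For reference, the stage that begins here is driven by test/nvmf/host/perf.sh, invoked through run_test with --transport=rdma as shown above. A minimal way to re-run just this stage by hand, assuming an SPDK checkout at the same path and root privileges (the sudo call is an assumption; the script path and flag are taken from the log), might look like:

# hedged sketch: re-run only the nvmf_perf host test stage (sudo assumed)
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
sudo ./test/nvmf/host/perf.sh --transport=rdma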
14:03:13 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.101 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:20.362 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:20.362 14:03:13 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:20.362 14:03:13 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:35:28.507 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:35:28.508 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:35:28.508 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:98:00.0: mlx_0_0' 00:35:28.508 Found net devices under 0000:98:00.0: mlx_0_0 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:35:28.508 Found net devices under 0000:98:00.1: mlx_0_1 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:28.508 14:03:19 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- 
nvmf/common.sh@105 -- # continue 2 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:35:28.508 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:28.508 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:35:28.508 altname enp152s0f0np0 00:35:28.508 altname ens817f0np0 00:35:28.508 inet 192.168.100.8/24 scope global mlx_0_0 00:35:28.508 valid_lft forever preferred_lft forever 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:35:28.508 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:28.508 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:35:28.508 altname enp152s0f1np1 00:35:28.508 altname ens817f1np1 00:35:28.508 inet 192.168.100.9/24 scope global mlx_0_1 00:35:28.508 valid_lft forever preferred_lft forever 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf 
-- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:28.508 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:35:28.509 192.168.100.9' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:35:28.509 192.168.100.9' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:35:28.509 192.168.100.9' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:28.509 14:03:20 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2333190 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2333190 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 2333190 ']' 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:28.509 14:03:20 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:28.509 [2024-06-11 14:03:20.223882] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:35:28.509 [2024-06-11 14:03:20.223951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.509 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.509 [2024-06-11 14:03:20.290717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:28.509 [2024-06-11 14:03:20.367325] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.509 [2024-06-11 14:03:20.367365] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.509 [2024-06-11 14:03:20.367373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.509 [2024-06-11 14:03:20.367379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.509 [2024-06-11 14:03:20.367385] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
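The EAL banner above comes from nvmfappstart launching the SPDK target with core mask 0xF and then waiting for its JSON-RPC socket. A rough standalone equivalent is sketched below; the nvmf_tgt binary and its flags are copied from the log, while the backgrounding and the rpc_get_methods polling loop are assumptions about how one might wait for readiness by hand:

# hedged sketch: start nvmf_tgt as in the log and poll until its RPC socket answers
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is ready"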
00:35:28.509 [2024-06-11 14:03:20.367526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.509 [2024-06-11 14:03:20.367653] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:28.509 [2024-06-11 14:03:20.367792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.509 [2024-06-11 14:03:20.367792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:28.509 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:28.770 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:35:28.770 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:35:29.030 14:03:21 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:35:29.290 [2024-06-11 14:03:22.016214] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:35:29.290 [2024-06-11 14:03:22.046047] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d1d230/0x1e4af00) succeed. 00:35:29.290 [2024-06-11 14:03:22.060795] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d1e870/0x1dcaec0) succeed. 
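With the target up, perf.sh creates the RDMA transport over JSON-RPC; that call is what produces the in-capsule-data warning and the two "Create IB device ... succeed" notices above. The same call issued directly, with the options copied verbatim from the log, would be roughly:

# hedged sketch: create the RDMA transport with the options seen in the log
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
  nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0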
00:35:29.290 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:29.550 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:35:29.550 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:29.811 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:35:29.811 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:29.811 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:30.071 [2024-06-11 14:03:22.832117] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:30.071 14:03:22 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:35:30.331 14:03:23 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:35:30.331 14:03:23 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:35:30.331 14:03:23 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:35:30.331 14:03:23 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:35:31.714 Initializing NVMe Controllers 00:35:31.714 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:35:31.714 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:35:31.714 Initialization complete. Launching workers. 00:35:31.714 ======================================================== 00:35:31.714 Latency(us) 00:35:31.714 Device Information : IOPS MiB/s Average min max 00:35:31.714 PCIE (0000:65:00.0) NSID 1 from core 0: 79223.82 309.47 403.52 13.49 4883.02 00:35:31.714 ======================================================== 00:35:31.714 Total : 79223.82 309.47 403.52 13.49 4883.02 00:35:31.714 00:35:31.714 14:03:24 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:31.714 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.011 Initializing NVMe Controllers 00:35:35.011 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:35.011 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:35.011 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:35.011 Initialization complete. Launching workers. 
00:35:35.011 ======================================================== 00:35:35.011 Latency(us) 00:35:35.011 Device Information : IOPS MiB/s Average min max 00:35:35.011 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9772.00 38.17 102.06 37.43 4085.19 00:35:35.011 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7184.00 28.06 138.37 52.44 4110.76 00:35:35.011 ======================================================== 00:35:35.011 Total : 16956.00 66.23 117.45 37.43 4110.76 00:35:35.011 00:35:35.011 14:03:27 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:35.011 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.311 Initializing NVMe Controllers 00:35:38.311 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:38.311 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:38.311 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:38.311 Initialization complete. Launching workers. 00:35:38.311 ======================================================== 00:35:38.311 Latency(us) 00:35:38.311 Device Information : IOPS MiB/s Average min max 00:35:38.311 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20593.98 80.45 1553.60 395.31 5328.36 00:35:38.311 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7979.33 7073.88 9059.67 00:35:38.311 ======================================================== 00:35:38.311 Total : 24625.98 96.20 2605.68 395.31 9059.67 00:35:38.311 00:35:38.311 14:03:31 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:35:38.311 14:03:31 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:38.311 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.597 Initializing NVMe Controllers 00:35:43.597 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.597 Controller IO queue size 128, less than required. 00:35:43.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.597 Controller IO queue size 128, less than required. 00:35:43.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.597 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:43.597 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:43.597 Initialization complete. Launching workers. 
00:35:43.597 ======================================================== 00:35:43.597 Latency(us) 00:35:43.597 Device Information : IOPS MiB/s Average min max 00:35:43.597 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4295.09 1073.77 29859.90 10689.80 79868.74 00:35:43.597 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4359.92 1089.98 29045.05 11352.92 56314.02 00:35:43.597 ======================================================== 00:35:43.597 Total : 8655.00 2163.75 29449.43 10689.80 79868.74 00:35:43.597 00:35:43.597 14:03:35 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:35:43.597 EAL: No free 2048 kB hugepages reported on node 1 00:35:43.597 No valid NVMe controllers or AIO or URING devices found 00:35:43.597 Initializing NVMe Controllers 00:35:43.597 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:43.597 Controller IO queue size 128, less than required. 00:35:43.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.597 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:35:43.598 Controller IO queue size 128, less than required. 00:35:43.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:43.598 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:35:43.598 WARNING: Some requested NVMe devices were skipped 00:35:43.598 14:03:35 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:35:43.598 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.805 Initializing NVMe Controllers 00:35:47.805 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:35:47.805 Controller IO queue size 128, less than required. 00:35:47.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:47.805 Controller IO queue size 128, less than required. 00:35:47.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:47.805 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:47.805 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:47.805 Initialization complete. Launching workers. 
00:35:47.805 00:35:47.805 ==================== 00:35:47.805 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:35:47.805 RDMA transport: 00:35:47.805 dev name: mlx5_0 00:35:47.805 polls: 266059 00:35:47.805 idle_polls: 261658 00:35:47.805 completions: 53910 00:35:47.805 queued_requests: 1 00:35:47.805 total_send_wrs: 26955 00:35:47.805 send_doorbell_updates: 3992 00:35:47.805 total_recv_wrs: 27082 00:35:47.805 recv_doorbell_updates: 3993 00:35:47.805 --------------------------------- 00:35:47.805 00:35:47.805 ==================== 00:35:47.805 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:35:47.805 RDMA transport: 00:35:47.805 dev name: mlx5_0 00:35:47.805 polls: 266955 00:35:47.805 idle_polls: 266695 00:35:47.805 completions: 17658 00:35:47.805 queued_requests: 1 00:35:47.805 total_send_wrs: 8829 00:35:47.805 send_doorbell_updates: 247 00:35:47.805 total_recv_wrs: 8956 00:35:47.805 recv_doorbell_updates: 248 00:35:47.805 --------------------------------- 00:35:47.805 ======================================================== 00:35:47.805 Latency(us) 00:35:47.805 Device Information : IOPS MiB/s Average min max 00:35:47.805 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6738.50 1684.62 18993.40 8095.29 63729.13 00:35:47.805 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2207.00 551.75 57740.60 31382.29 88077.36 00:35:47.805 ======================================================== 00:35:47.805 Total : 8945.50 2236.37 28552.96 8095.29 88077.36 00:35:47.805 00:35:47.805 14:03:40 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:35:47.805 14:03:40 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:47.805 14:03:40 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:35:47.805 14:03:40 nvmf_rdma.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:35:47.805 14:03:40 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@72 -- # ls_guid=a0c992fa-6164-4d7b-801f-26090f22b1ac 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb a0c992fa-6164-4d7b-801f-26090f22b1ac 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=a0c992fa-6164-4d7b-801f-26090f22b1ac 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:35:48.745 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:35:49.005 { 00:35:49.005 "uuid": "a0c992fa-6164-4d7b-801f-26090f22b1ac", 00:35:49.005 "name": "lvs_0", 00:35:49.005 "base_bdev": "Nvme0n1", 00:35:49.005 "total_data_clusters": 457407, 00:35:49.005 "free_clusters": 457407, 00:35:49.005 "block_size": 512, 00:35:49.005 "cluster_size": 4194304 00:35:49.005 } 00:35:49.005 ]' 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | 
select(.uuid=="a0c992fa-6164-4d7b-801f-26090f22b1ac") .free_clusters' 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=457407 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a0c992fa-6164-4d7b-801f-26090f22b1ac") .cluster_size' 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=1829628 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 1829628 00:35:49.005 1829628 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:35:49.005 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0c992fa-6164-4d7b-801f-26090f22b1ac lbd_0 20480 00:35:49.264 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@80 -- # lb_guid=a09a23be-12c1-4bbf-83c6-f26651244253 00:35:49.264 14:03:41 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore a09a23be-12c1-4bbf-83c6-f26651244253 lvs_n_0 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=212e07db-4229-4772-97c5-eb361e7386dc 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 212e07db-4229-4772-97c5-eb361e7386dc 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=212e07db-4229-4772-97c5-eb361e7386dc 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:35:51.172 { 00:35:51.172 "uuid": "a0c992fa-6164-4d7b-801f-26090f22b1ac", 00:35:51.172 "name": "lvs_0", 00:35:51.172 "base_bdev": "Nvme0n1", 00:35:51.172 "total_data_clusters": 457407, 00:35:51.172 "free_clusters": 452287, 00:35:51.172 "block_size": 512, 00:35:51.172 "cluster_size": 4194304 00:35:51.172 }, 00:35:51.172 { 00:35:51.172 "uuid": "212e07db-4229-4772-97c5-eb361e7386dc", 00:35:51.172 "name": "lvs_n_0", 00:35:51.172 "base_bdev": "a09a23be-12c1-4bbf-83c6-f26651244253", 00:35:51.172 "total_data_clusters": 5114, 00:35:51.172 "free_clusters": 5114, 00:35:51.172 "block_size": 512, 00:35:51.172 "cluster_size": 4194304 00:35:51.172 } 00:35:51.172 ]' 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="212e07db-4229-4772-97c5-eb361e7386dc") .free_clusters' 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="212e07db-4229-4772-97c5-eb361e7386dc") .cluster_size' 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 
20456 00:35:51.172 20456 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:35:51.172 14:03:43 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 212e07db-4229-4772-97c5-eb361e7386dc lbd_nest_0 20456 00:35:51.172 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=36354949-4716-429f-9bdb-837768ae2cc1 00:35:51.172 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:51.433 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:35:51.433 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 36354949-4716-429f-9bdb-837768ae2cc1 00:35:51.693 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:51.693 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:35:51.693 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:35:51.693 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:35:51.693 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:51.693 14:03:44 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:35:51.693 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.946 Initializing NVMe Controllers 00:36:03.946 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:03.946 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:03.946 Initialization complete. Launching workers. 00:36:03.946 ======================================================== 00:36:03.946 Latency(us) 00:36:03.946 Device Information : IOPS MiB/s Average min max 00:36:03.946 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6466.81 3.16 154.16 62.08 8053.99 00:36:03.946 ======================================================== 00:36:03.946 Total : 6466.81 3.16 154.16 62.08 8053.99 00:36:03.946 00:36:03.946 14:03:55 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:03.946 14:03:55 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:03.946 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.173 Initializing NVMe Controllers 00:36:16.173 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:16.173 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:16.173 Initialization complete. Launching workers. 
00:36:16.173 ======================================================== 00:36:16.173 Latency(us) 00:36:16.173 Device Information : IOPS MiB/s Average min max 00:36:16.173 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3032.50 379.06 329.36 132.59 8071.13 00:36:16.173 ======================================================== 00:36:16.174 Total : 3032.50 379.06 329.36 132.59 8071.13 00:36:16.174 00:36:16.174 14:04:07 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:36:16.174 14:04:07 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:16.174 14:04:07 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:16.174 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.167 Initializing NVMe Controllers 00:36:26.167 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:26.167 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:26.167 Initialization complete. Launching workers. 00:36:26.167 ======================================================== 00:36:26.167 Latency(us) 00:36:26.167 Device Information : IOPS MiB/s Average min max 00:36:26.167 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12651.80 6.18 2528.99 595.05 9501.81 00:36:26.167 ======================================================== 00:36:26.167 Total : 12651.80 6.18 2528.99 595.05 9501.81 00:36:26.167 00:36:26.167 14:04:18 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:26.167 14:04:18 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:26.167 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.395 Initializing NVMe Controllers 00:36:38.395 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:38.395 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:38.395 Initialization complete. Launching workers. 00:36:38.395 ======================================================== 00:36:38.395 Latency(us) 00:36:38.395 Device Information : IOPS MiB/s Average min max 00:36:38.395 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4000.02 500.00 8005.70 4892.02 16036.22 00:36:38.395 ======================================================== 00:36:38.395 Total : 4000.02 500.00 8005.70 4892.02 16036.22 00:36:38.395 00:36:38.395 14:04:30 nvmf_rdma.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:36:38.395 14:04:30 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:38.395 14:04:30 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:38.395 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.621 Initializing NVMe Controllers 00:36:50.621 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:50.621 Controller IO queue size 128, less than required. 
00:36:50.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:50.621 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:50.621 Initialization complete. Launching workers. 00:36:50.621 ======================================================== 00:36:50.621 Latency(us) 00:36:50.621 Device Information : IOPS MiB/s Average min max 00:36:50.621 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20357.14 9.94 6290.88 1767.50 13007.30 00:36:50.621 ======================================================== 00:36:50.621 Total : 20357.14 9.94 6290.88 1767.50 13007.30 00:36:50.621 00:36:50.621 14:04:41 nvmf_rdma.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:36:50.621 14:04:41 nvmf_rdma.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:50.621 EAL: No free 2048 kB hugepages reported on node 1 00:37:00.622 Initializing NVMe Controllers 00:37:00.622 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:37:00.622 Controller IO queue size 128, less than required. 00:37:00.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:00.622 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:00.622 Initialization complete. Launching workers. 00:37:00.622 ======================================================== 00:37:00.622 Latency(us) 00:37:00.622 Device Information : IOPS MiB/s Average min max 00:37:00.622 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12417.90 1552.24 10309.94 3381.24 22141.07 00:37:00.622 ======================================================== 00:37:00.622 Total : 12417.90 1552.24 10309.94 3381.24 22141.07 00:37:00.622 00:37:00.622 14:04:52 nvmf_rdma.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:00.622 14:04:53 nvmf_rdma.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36354949-4716-429f-9bdb-837768ae2cc1 00:37:02.008 14:04:54 nvmf_rdma.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:37:02.008 14:04:54 nvmf_rdma.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a09a23be-12c1-4bbf-83c6-f26651244253 00:37:02.269 14:04:55 nvmf_rdma.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:02.529 rmmod nvme_rdma 00:37:02.529 rmmod nvme_fabrics 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2333190 ']' 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2333190 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 2333190 ']' 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 2333190 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2333190 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2333190' 00:37:02.529 killing process with pid 2333190 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@968 -- # kill 2333190 00:37:02.529 14:04:55 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@973 -- # wait 2333190 00:37:05.075 14:04:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:05.075 14:04:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:05.075 00:37:05.075 real 1m44.525s 00:37:05.075 user 6m32.510s 00:37:05.075 sys 0m6.820s 00:37:05.075 14:04:57 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:05.075 14:04:57 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:37:05.075 ************************************ 00:37:05.075 END TEST nvmf_perf 00:37:05.075 ************************************ 00:37:05.075 14:04:57 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:37:05.075 14:04:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:05.075 14:04:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:05.075 14:04:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:05.075 ************************************ 00:37:05.075 START TEST nvmf_fio_host 00:37:05.075 ************************************ 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:37:05.075 * Looking for test storage... 
00:37:05.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:05.075 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:37:05.076 14:04:57 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 
00:37:11.689 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:37:11.689 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:37:11.689 Found net devices under 0000:98:00.0: mlx_0_0 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:37:11.689 Found net devices under 0000:98:00.1: mlx_0_1 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:37:11.689 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:11.689 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:37:11.689 altname enp152s0f0np0 00:37:11.689 altname ens817f0np0 00:37:11.689 inet 192.168.100.8/24 scope global mlx_0_0 00:37:11.689 valid_lft forever preferred_lft forever 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:11.689 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:37:11.690 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:11.690 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:37:11.690 altname enp152s0f1np1 00:37:11.690 altname ens817f1np1 00:37:11.690 inet 192.168.100.9/24 scope global mlx_0_1 00:37:11.690 valid_lft forever preferred_lft forever 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 
-- # continue 2 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:37:11.690 192.168.100.9' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:37:11.690 192.168.100.9' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:37:11.690 192.168.100.9' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2355202 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2355202 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 2355202 ']' 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:37:11.690 14:05:04 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:11.690 [2024-06-11 14:05:04.489298] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:37:11.690 [2024-06-11 14:05:04.489367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.690 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.690 [2024-06-11 14:05:04.556042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:11.950 [2024-06-11 14:05:04.631286] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.950 [2024-06-11 14:05:04.631326] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.950 [2024-06-11 14:05:04.631334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.950 [2024-06-11 14:05:04.631341] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:11.950 [2024-06-11 14:05:04.631350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:11.950 [2024-06-11 14:05:04.631488] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.950 [2024-06-11 14:05:04.631607] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:11.950 [2024-06-11 14:05:04.631764] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.950 [2024-06-11 14:05:04.631765] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:12.519 14:05:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:12.519 14:05:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:37:12.520 14:05:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:12.780 [2024-06-11 14:05:05.435953] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2247e90/0x224c380) succeed. 
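The address discovery traced above (nvmf/common.sh: get_rdma_if_list feeding get_ip_address, then head/tail over the collected list) reduces to the following minimal bash sketch. This is a simplified reconstruction, not the verbatim helpers; it assumes the two RDMA ports are already named mlx_0_0/mlx_0_1 and carry the 192.168.100.0/24 addresses shown in the log.

    # Simplified sketch of the IP discovery traced above.
    get_ip_address() {
        local interface=$1
        # Column 4 of `ip -o -4 addr show` is "ADDR/PREFIXLEN"; drop the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic_name in mlx_0_0 mlx_0_1; do get_ip_address "$nic_name"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9

The first of the two addresses is what host/fio.sh passes to nvmf_subsystem_add_listener below.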
00:37:12.780 [2024-06-11 14:05:05.449719] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22494d0/0x228da10) succeed. 00:37:12.780 14:05:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:37:12.780 14:05:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:12.780 14:05:05 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:37:12.781 14:05:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:37:13.041 Malloc1 00:37:13.041 14:05:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.302 14:05:05 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:13.302 14:05:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:13.561 [2024-06-11 14:05:06.287642] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:13.561 14:05:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:13.847 
14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:37:13.847 14:05:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:14.107 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:14.108 fio-3.35 00:37:14.108 Starting 1 thread 00:37:14.108 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.659 00:37:16.659 test: (groupid=0, jobs=1): err= 0: pid=2355773: Tue Jun 11 14:05:09 2024 00:37:16.659 read: IOPS=19.2k, BW=75.0MiB/s (78.6MB/s)(150MiB/2003msec) 00:37:16.659 slat (nsec): min=2044, max=32050, avg=2129.74, stdev=497.30 00:37:16.659 clat (usec): min=2326, max=5612, avg=3317.89, stdev=546.73 00:37:16.659 lat (usec): min=2358, max=5614, avg=3320.02, stdev=546.79 00:37:16.659 clat percentiles (usec): 00:37:16.659 | 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3032], 00:37:16.659 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3064], 00:37:16.659 | 70.00th=[ 3064], 80.00th=[ 3326], 90.00th=[ 4424], 95.00th=[ 4424], 00:37:16.659 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 5145], 00:37:16.659 | 99.99th=[ 5538] 00:37:16.659 bw ( KiB/s): min=56776, max=84016, per=99.98%, avg=76750.00, stdev=13338.31, samples=4 00:37:16.659 iops : min=14194, max=21004, avg=19187.50, stdev=3334.58, samples=4 00:37:16.659 write: IOPS=19.2k, BW=74.9MiB/s (78.5MB/s)(150MiB/2003msec); 0 zone resets 00:37:16.659 slat (nsec): min=2115, max=72214, avg=2238.72, stdev=628.57 00:37:16.659 clat (usec): min=2714, max=5620, avg=3318.38, stdev=547.82 00:37:16.659 lat (usec): min=2716, max=5623, avg=3320.62, stdev=547.88 00:37:16.659 clat percentiles (usec): 00:37:16.659 | 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3032], 00:37:16.659 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3064], 00:37:16.659 | 70.00th=[ 3064], 80.00th=[ 3621], 90.00th=[ 4424], 95.00th=[ 4424], 00:37:16.659 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 5145], 00:37:16.659 | 99.99th=[ 5538] 00:37:16.659 bw ( KiB/s): min=57112, max=84072, per=99.99%, avg=76676.00, stdev=13078.17, samples=4 00:37:16.659 iops : min=14278, max=21018, avg=19169.00, stdev=3269.54, samples=4 00:37:16.659 lat (msec) : 4=80.73%, 10=19.27% 00:37:16.659 cpu : usr=99.55%, sys=0.00%, ctx=16, majf=0, minf=3 00:37:16.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:16.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:16.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:16.659 issued rwts: total=38439,38401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:16.659 00:37:16.659 Run status group 0 (all jobs): 00:37:16.659 READ: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=150MiB (157MB), run=2003-2003msec 00:37:16.659 WRITE: bw=74.9MiB/s (78.5MB/s), 74.9MiB/s-74.9MiB/s (78.5MB/s-78.5MB/s), io=150MiB (157MB), run=2003-2003msec 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:16.659 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:37:16.660 14:05:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 
trsvcid=4420 ns=1' 00:37:16.919 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:37:16.919 fio-3.35 00:37:16.919 Starting 1 thread 00:37:16.919 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.462 00:37:19.462 test: (groupid=0, jobs=1): err= 0: pid=2356548: Tue Jun 11 14:05:11 2024 00:37:19.462 read: IOPS=12.9k, BW=202MiB/s (212MB/s)(399MiB/1974msec) 00:37:19.462 slat (nsec): min=3418, max=47612, avg=3668.87, stdev=1292.35 00:37:19.462 clat (usec): min=304, max=11435, avg=3020.41, stdev=1704.67 00:37:19.462 lat (usec): min=307, max=11459, avg=3024.08, stdev=1704.96 00:37:19.462 clat percentiles (usec): 00:37:19.462 | 1.00th=[ 709], 5.00th=[ 1123], 10.00th=[ 1319], 20.00th=[ 1598], 00:37:19.462 | 30.00th=[ 1860], 40.00th=[ 2147], 50.00th=[ 2507], 60.00th=[ 2999], 00:37:19.462 | 70.00th=[ 3523], 80.00th=[ 4359], 90.00th=[ 5800], 95.00th=[ 6325], 00:37:19.462 | 99.00th=[ 7898], 99.50th=[ 8291], 99.90th=[ 8717], 99.95th=[10159], 00:37:19.462 | 99.99th=[11338] 00:37:19.462 bw ( KiB/s): min=91712, max=110112, per=49.46%, avg=102424.00, stdev=8168.06, samples=4 00:37:19.462 iops : min= 5732, max= 6882, avg=6401.50, stdev=510.50, samples=4 00:37:19.462 write: IOPS=7196, BW=112MiB/s (118MB/s)(208MiB/1846msec); 0 zone resets 00:37:19.462 slat (usec): min=39, max=516, avg=41.08, stdev= 8.34 00:37:19.462 clat (usec): min=1182, max=24064, avg=11614.28, stdev=5047.60 00:37:19.462 lat (usec): min=1222, max=24108, avg=11655.35, stdev=5047.76 00:37:19.462 clat percentiles (usec): 00:37:19.462 | 1.00th=[ 2409], 5.00th=[ 3425], 10.00th=[ 4752], 20.00th=[ 6783], 00:37:19.462 | 30.00th=[ 8160], 40.00th=[ 9372], 50.00th=[11076], 60.00th=[14353], 00:37:19.462 | 70.00th=[15664], 80.00th=[16712], 90.00th=[17957], 95.00th=[19006], 00:37:19.462 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22938], 99.95th=[23725], 00:37:19.462 | 99.99th=[23987] 00:37:19.462 bw ( KiB/s): min=95424, max=111776, per=91.60%, avg=105464.00, stdev=7431.65, samples=4 00:37:19.462 iops : min= 5964, max= 6986, avg=6591.50, stdev=464.48, samples=4 00:37:19.462 lat (usec) : 500=0.16%, 750=0.61%, 1000=1.49% 00:37:19.462 lat (msec) : 2=20.91%, 4=29.48%, 10=28.28%, 20=18.52%, 50=0.55% 00:37:19.462 cpu : usr=97.15%, sys=0.75%, ctx=183, majf=0, minf=8 00:37:19.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:37:19.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:19.462 issued rwts: total=25548,13284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:19.462 00:37:19.462 Run status group 0 (all jobs): 00:37:19.462 READ: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=399MiB (419MB), run=1974-1974msec 00:37:19.462 WRITE: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=208MiB (218MB), run=1846-1846msec 00:37:19.462 14:05:11 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 
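The fio_plugin trace above shows how fio is pointed at the namespace exported over RDMA: the SPDK nvme fio plugin is LD_PRELOADed into the stock fio binary, and the transport tuple is packed into the --filename argument instead of a block-device path. Leaving out the sanitizer detection (the ldd | grep libasan / libclang_rt.asan probes, which resolve to nothing in this build), the invocation is roughly the following sketch; the variable names are mine, the paths and arguments are taken from the trace.

    # Sketch of the fio invocation traced above; paths follow this workspace layout.
    PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme
    SPDK_FIO_PLUGIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme

    LD_PRELOAD="$SPDK_FIO_PLUGIN" /usr/src/fio/fio "$PLUGIN_DIR/example_config.fio" \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
        --bs=4096

The same pattern is reused for the mock_sgl_config.fio run above and for the lvol-backed runs that follow; only the job file and the subsystem behind traddr/trsvcid change.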
00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1512 -- # local bdfs 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:37:19.462 14:05:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 192.168.100.8 00:37:20.034 Nvme0n1 00:37:20.034 14:05:12 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=149d8437-6bda-40b5-80c7-d4a9d367ad02 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 149d8437-6bda-40b5-80c7-d4a9d367ad02 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=149d8437-6bda-40b5-80c7-d4a9d367ad02 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:37:20.605 { 00:37:20.605 "uuid": "149d8437-6bda-40b5-80c7-d4a9d367ad02", 00:37:20.605 "name": "lvs_0", 00:37:20.605 "base_bdev": "Nvme0n1", 00:37:20.605 "total_data_clusters": 1787, 00:37:20.605 "free_clusters": 1787, 00:37:20.605 "block_size": 512, 00:37:20.605 "cluster_size": 1073741824 00:37:20.605 } 00:37:20.605 ]' 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="149d8437-6bda-40b5-80c7-d4a9d367ad02") .free_clusters' 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=1787 00:37:20.605 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="149d8437-6bda-40b5-80c7-d4a9d367ad02") .cluster_size' 00:37:20.865 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:37:20.865 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1829888 00:37:20.865 14:05:13 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1829888 00:37:20.865 1829888 00:37:20.865 14:05:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:37:20.865 c00ff4c1-8069-4802-85ea-dbd71a298077 00:37:20.865 14:05:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s 
SPDK00000000000001 00:37:21.126 14:05:13 nvmf_rdma.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:21.386 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:37:21.387 14:05:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:21.961 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk, iodepth=128 00:37:21.961 fio-3.35 00:37:21.961 Starting 1 thread 00:37:21.961 EAL: No free 2048 kB hugepages reported on node 1 00:37:24.509 00:37:24.509 test: (groupid=0, jobs=1): err= 0: pid=2357621: Tue Jun 11 14:05:16 2024 00:37:24.509 read: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(107MiB/2004msec) 00:37:24.509 slat (nsec): min=2049, max=20457, avg=2171.05, stdev=210.75 00:37:24.509 clat (usec): min=2369, max=8414, avg=4660.64, stdev=169.41 00:37:24.509 lat (usec): min=2381, max=8416, avg=4662.81, stdev=169.39 00:37:24.509 clat percentiles (usec): 00:37:24.509 | 1.00th=[ 3949], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4621], 00:37:24.509 | 30.00th=[ 4621], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:37:24.509 | 70.00th=[ 4686], 80.00th=[ 4686], 90.00th=[ 4686], 95.00th=[ 4686], 00:37:24.509 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6259], 99.95th=[ 7373], 00:37:24.509 | 99.99th=[ 8455] 00:37:24.509 bw ( KiB/s): min=52656, max=55144, per=99.95%, avg=54446.00, stdev=1196.62, samples=4 00:37:24.509 iops : min=13164, max=13786, avg=13611.50, stdev=299.16, samples=4 00:37:24.509 write: IOPS=13.6k, BW=53.1MiB/s (55.7MB/s)(107MiB/2004msec); 0 zone resets 00:37:24.509 slat (nsec): min=2120, max=12168, avg=2286.77, stdev=215.18 00:37:24.509 clat (usec): min=2376, max=8423, avg=4645.22, stdev=159.02 00:37:24.509 lat (usec): min=2381, max=8425, avg=4647.51, stdev=159.00 00:37:24.509 clat percentiles (usec): 00:37:24.509 | 1.00th=[ 3949], 5.00th=[ 4621], 10.00th=[ 4621], 20.00th=[ 4621], 00:37:24.509 | 30.00th=[ 4621], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:37:24.509 | 70.00th=[ 4686], 80.00th=[ 4686], 90.00th=[ 4686], 95.00th=[ 4686], 00:37:24.509 | 99.00th=[ 5342], 99.50th=[ 5342], 99.90th=[ 6063], 99.95th=[ 7308], 00:37:24.509 | 99.99th=[ 8225] 00:37:24.509 bw ( KiB/s): min=52888, max=55144, per=99.98%, avg=54410.00, stdev=1033.59, samples=4 00:37:24.509 iops : min=13222, max=13786, avg=13602.50, stdev=258.40, samples=4 00:37:24.509 lat (msec) : 4=1.57%, 10=98.43% 00:37:24.509 cpu : usr=99.65%, sys=0.05%, ctx=16, majf=0, minf=9 00:37:24.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:24.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:24.509 issued rwts: total=27291,27264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:24.509 00:37:24.509 Run status group 0 (all jobs): 00:37:24.509 READ: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=107MiB (112MB), run=2004-2004msec 00:37:24.509 WRITE: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=107MiB (112MB), run=2004-2004msec 00:37:24.509 14:05:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:24.509 14:05:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=036497d4-98a5-468c-99c7-217ef90c544f 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 036497d4-98a5-468c-99c7-217ef90c544f 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local 
lvs_uuid=036497d4-98a5-468c-99c7-217ef90c544f 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:37:25.081 14:05:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:37:25.341 { 00:37:25.341 "uuid": "149d8437-6bda-40b5-80c7-d4a9d367ad02", 00:37:25.341 "name": "lvs_0", 00:37:25.341 "base_bdev": "Nvme0n1", 00:37:25.341 "total_data_clusters": 1787, 00:37:25.341 "free_clusters": 0, 00:37:25.341 "block_size": 512, 00:37:25.341 "cluster_size": 1073741824 00:37:25.341 }, 00:37:25.341 { 00:37:25.341 "uuid": "036497d4-98a5-468c-99c7-217ef90c544f", 00:37:25.341 "name": "lvs_n_0", 00:37:25.341 "base_bdev": "c00ff4c1-8069-4802-85ea-dbd71a298077", 00:37:25.341 "total_data_clusters": 457025, 00:37:25.341 "free_clusters": 457025, 00:37:25.341 "block_size": 512, 00:37:25.341 "cluster_size": 4194304 00:37:25.341 } 00:37:25.341 ]' 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="036497d4-98a5-468c-99c7-217ef90c544f") .free_clusters' 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=457025 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="036497d4-98a5-468c-99c7-217ef90c544f") .cluster_size' 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1828100 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1828100 00:37:25.341 1828100 00:37:25.341 14:05:18 nvmf_rdma.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:37:26.728 bc60a3e8-5472-490e-9d92-b63b852d2b48 00:37:26.728 14:05:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:37:26.728 14:05:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:37:26.728 14:05:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:37:26.989 14:05:19 
nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:37:26.989 14:05:19 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:37:27.249 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:27.249 fio-3.35 00:37:27.249 Starting 1 thread 00:37:27.249 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.791 00:37:29.791 test: (groupid=0, jobs=1): err= 0: pid=2358887: Tue Jun 11 14:05:22 2024 00:37:29.791 read: IOPS=8071, BW=31.5MiB/s (33.1MB/s)(63.3MiB/2007msec) 00:37:29.791 slat (nsec): min=2056, max=20424, avg=2137.87, stdev=250.62 00:37:29.791 clat (usec): min=4007, max=14510, avg=7890.95, stdev=300.88 00:37:29.791 lat (usec): min=4016, max=14512, avg=7893.09, stdev=300.84 00:37:29.791 clat percentiles (usec): 00:37:29.791 | 1.00th=[ 7046], 5.00th=[ 7832], 10.00th=[ 7832], 20.00th=[ 7832], 00:37:29.791 | 30.00th=[ 7898], 40.00th=[ 7898], 50.00th=[ 7898], 60.00th=[ 7898], 00:37:29.791 | 70.00th=[ 7898], 80.00th=[ 7898], 90.00th=[ 7963], 95.00th=[ 7963], 00:37:29.791 | 99.00th=[ 8717], 99.50th=[ 8717], 99.90th=[11731], 99.95th=[13566], 00:37:29.791 | 99.99th=[13566] 00:37:29.791 bw ( KiB/s): min=31312, max=32744, per=100.00%, avg=32292.00, stdev=667.05, samples=4 00:37:29.791 iops : min= 7828, max= 8186, avg=8073.00, stdev=166.76, samples=4 00:37:29.791 
write: IOPS=8063, BW=31.5MiB/s (33.0MB/s)(63.2MiB/2007msec); 0 zone resets 00:37:29.791 slat (nsec): min=2128, max=8709, avg=2266.81, stdev=216.29 00:37:29.791 clat (usec): min=4011, max=13593, avg=7878.38, stdev=300.96 00:37:29.791 lat (usec): min=4016, max=13595, avg=7880.65, stdev=300.92 00:37:29.791 clat percentiles (usec): 00:37:29.791 | 1.00th=[ 6980], 5.00th=[ 7832], 10.00th=[ 7832], 20.00th=[ 7832], 00:37:29.791 | 30.00th=[ 7832], 40.00th=[ 7898], 50.00th=[ 7898], 60.00th=[ 7898], 00:37:29.791 | 70.00th=[ 7898], 80.00th=[ 7898], 90.00th=[ 7963], 95.00th=[ 7963], 00:37:29.791 | 99.00th=[ 8717], 99.50th=[ 8717], 99.90th=[11600], 99.95th=[12649], 00:37:29.791 | 99.99th=[13566] 00:37:29.791 bw ( KiB/s): min=32048, max=32392, per=99.89%, avg=32220.00, stdev=140.78, samples=4 00:37:29.791 iops : min= 8012, max= 8098, avg=8055.00, stdev=35.19, samples=4 00:37:29.791 lat (msec) : 10=99.80%, 20=0.20% 00:37:29.791 cpu : usr=99.50%, sys=0.15%, ctx=15, majf=0, minf=9 00:37:29.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:37:29.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:29.791 issued rwts: total=16199,16184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:29.791 00:37:29.791 Run status group 0 (all jobs): 00:37:29.791 READ: bw=31.5MiB/s (33.1MB/s), 31.5MiB/s-31.5MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.4MB), run=2007-2007msec 00:37:29.791 WRITE: bw=31.5MiB/s (33.0MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.0MB/s), io=63.2MiB (66.3MB), run=2007-2007msec 00:37:29.791 14:05:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:37:30.051 14:05:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:37:30.051 14:05:22 nvmf_rdma.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:37:31.963 14:05:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:37:32.223 14:05:24 nvmf_rdma.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:37:32.794 14:05:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:37:33.054 14:05:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:37:34.971 rmmod nvme_rdma 00:37:34.971 rmmod nvme_fabrics 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2355202 ']' 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2355202 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 2355202 ']' 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 2355202 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2355202 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2355202' 00:37:34.971 killing process with pid 2355202 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 2355202 00:37:34.971 14:05:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 2355202 00:37:35.316 14:05:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:35.316 14:05:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:37:35.316 00:37:35.316 real 0m30.574s 00:37:35.316 user 2m43.383s 00:37:35.316 sys 0m7.162s 00:37:35.316 14:05:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:35.316 14:05:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:37:35.316 ************************************ 00:37:35.316 END TEST nvmf_fio_host 00:37:35.316 ************************************ 00:37:35.316 14:05:28 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:37:35.316 14:05:28 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:35.316 14:05:28 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:35.316 14:05:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:35.316 ************************************ 00:37:35.316 START TEST nvmf_failover 00:37:35.316 ************************************ 00:37:35.316 14:05:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:37:35.605 * Looking for test storage... 
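The teardown traced just above (nvmftestfini in nvmf/common.sh) undoes the earlier setup: the kernel initiator modules are unloaded and the nvmf_tgt started at the top of the test is killed. A simplified sketch of that sequence follows; in the harness the module unload sits inside a retry loop and killprocess also verifies the process name before signalling.

    # Simplified sketch of the nvmftestfini sequence traced above.
    set +e
    modprobe -v -r nvme-rdma        # "rmmod nvme_rdma" in the log output
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$nvmfpid" ]; then      # nvmfpid=2355202 in this run
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi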
00:37:35.605 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:37:35.605 14:05:28 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:43.754 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.754 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:37:43.754 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:43.754 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:43.755 
14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:37:43.755 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:37:43.755 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:37:43.755 Found net devices under 0000:98:00.0: mlx_0_0 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:37:43.755 Found net devices under 0000:98:00.1: mlx_0_1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:37:43.755 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:43.755 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:37:43.755 altname enp152s0f0np0 00:37:43.755 altname ens817f0np0 00:37:43.755 inet 192.168.100.8/24 scope global mlx_0_0 00:37:43.755 valid_lft forever preferred_lft forever 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:37:43.755 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:37:43.755 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:37:43.755 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:37:43.755 altname enp152s0f1np1 00:37:43.755 altname ens817f1np1 00:37:43.755 inet 192.168.100.9/24 scope global mlx_0_1 00:37:43.755 valid_lft forever preferred_lft forever 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:37:43.756 14:05:35 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:37:43.756 192.168.100.9' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:37:43.756 192.168.100.9' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:37:43.756 192.168.100.9' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2364033 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2364033 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2364033 ']' 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:43.756 14:05:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:43.756 [2024-06-11 14:05:35.413852] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:37:43.756 [2024-06-11 14:05:35.413921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.756 EAL: No free 2048 kB hugepages reported on node 1 00:37:43.756 [2024-06-11 14:05:35.495523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:43.756 [2024-06-11 14:05:35.588540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:43.756 [2024-06-11 14:05:35.588601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:43.756 [2024-06-11 14:05:35.588609] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:43.756 [2024-06-11 14:05:35.588615] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:43.756 [2024-06-11 14:05:35.588621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
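For readers following the trace, the address-discovery logic recorded above (the get_ip_address / get_available_rdma_ips calls in nvmf/common.sh) reduces to the shell steps below. This is a minimal sketch reconstructed from the xtrace output, assuming the two mlx_0_* netdevs already carry the 192.168.100.0/24 addresses shown in the log; it is not the verbatim nvmf/common.sh source.

#!/usr/bin/env bash
# Sketch of the RDMA IP discovery seen in the xtrace above.
get_ip_address() {
    local interface=$1
    # Same pipeline the trace shows: 4th field of `ip -o -4 addr show`, prefix length stripped.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"

# First line becomes the primary target IP, second line the failover IP, as in the log.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "$NVMF_FIRST_TARGET_IP"   # 192.168.100.8 in this run
echo "$NVMF_SECOND_TARGET_IP"  # 192.168.100.9 in this run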
00:37:43.756 [2024-06-11 14:05:35.588760] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:43.756 [2024-06-11 14:05:35.588916] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.756 [2024-06-11 14:05:35.588917] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:37:43.756 [2024-06-11 14:05:36.411100] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc65d0/0xdcaac0) succeed. 00:37:43.756 [2024-06-11 14:05:36.423997] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc7b70/0xe0c150) succeed. 00:37:43.756 14:05:36 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:44.017 Malloc0 00:37:44.017 14:05:36 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:44.017 14:05:36 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:44.278 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:44.538 [2024-06-11 14:05:37.203814] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:44.538 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:37:44.538 [2024-06-11 14:05:37.363965] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:37:44.538 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:37:44.799 [2024-06-11 14:05:37.524502] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2364399 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm 
-f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2364399 /var/tmp/bdevperf.sock 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2364399 ']' 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:44.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:44.799 14:05:37 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:45.741 14:05:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:45.741 14:05:38 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:37:45.741 14:05:38 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:45.741 NVMe0n1 00:37:45.741 14:05:38 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:46.001 00:37:46.001 14:05:38 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2364734 00:37:46.001 14:05:38 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:46.001 14:05:38 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:37:46.941 14:05:39 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:47.201 14:05:40 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:37:50.526 14:05:43 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:37:50.526 00:37:50.526 14:05:43 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:37:50.789 14:05:43 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:37:54.087 14:05:46 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:37:54.087 [2024-06-11 14:05:46.608389] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:37:54.087 14:05:46 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:37:55.028 14:05:47 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:37:55.028 14:05:47 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 2364734 00:38:01.618 0 00:38:01.618 14:05:53 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 2364399 00:38:01.618 14:05:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2364399 ']' 00:38:01.618 14:05:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2364399 00:38:01.618 14:05:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:38:01.618 14:05:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:01.618 14:05:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2364399 00:38:01.618 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:01.618 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:01.618 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2364399' 00:38:01.618 killing process with pid 2364399 00:38:01.618 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2364399 00:38:01.618 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2364399 00:38:01.618 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:01.618 [2024-06-11 14:05:37.593467] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:38:01.618 [2024-06-11 14:05:37.593523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364399 ] 00:38:01.618 EAL: No free 2048 kB hugepages reported on node 1 00:38:01.618 [2024-06-11 14:05:37.652937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.618 [2024-06-11 14:05:37.717178] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.618 Running I/O for 15 seconds... 
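The failover exercise that produced the 15-second bdevperf run above can be summarized as the following sequence of RPC calls, reconstructed from the host/failover.sh xtrace (steps 35 through 59). The rpc.py and bdevperf.py paths are shortened here; the full /var/jenkins/workspace/... paths appear in the trace. This is a sketch of what the log records, not the test script itself.

# Attach the controller through the primary and first failover listeners (ports 4420/4421).
RPC="scripts/rpc.py"
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the 15-second verify workload in the background (bdevperf was started with -z).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

# Force path changes while I/O is running: drop 4420, move to 4422, drop 4421,
# re-add 4420, then drop 4422 -- matching steps 43-57 of host/failover.sh in the log.
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
wait $run_test_pid

The aborted READ completions dumped from try.txt below are the expected side effect of this sequence: each listener removal deletes the submission queue the outstanding verify I/O was queued on, so those commands complete with ABORTED - SQ DELETION while bdevperf retries them on the surviving path.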
00:38:01.618 [2024-06-11 14:05:40.991866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.991912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.991932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.991941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.991951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.991958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.991968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.991975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.991984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.991991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.992001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.992008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.992021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.992028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.992037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x186e00 00:38:01.618 [2024-06-11 14:05:40.992044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.618 [2024-06-11 14:05:40.992054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:12504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12576 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.619 [2024-06-11 14:05:40.992661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186e00 00:38:01.619 [2024-06-11 14:05:40.992668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12648 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000755a000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186e00 
00:38:01.620 [2024-06-11 14:05:40.992831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992978] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.992987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.992994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x186e00 00:38:01.620 [2024-06-11 14:05:40.993267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.620 [2024-06-11 14:05:40.993277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 
dnr:0 00:38:01.621 [2024-06-11 14:05:40.993590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 
14:05:40.993738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x186e00 00:38:01.621 [2024-06-11 14:05:40.993860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.621 [2024-06-11 14:05:40.993869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:40.993876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.001657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x186e00 00:38:01.622 [2024-06-11 14:05:41.001664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.004249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:38:01.622 [2024-06-11 14:05:41.004274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:01.622 [2024-06-11 14:05:41.004283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13304 len:8 PRP1 0x0 PRP2 0x0 00:38:01.622 [2024-06-11 14:05:41.004292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.004329] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:38:01.622 [2024-06-11 14:05:41.004340] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:38:01.622 [2024-06-11 14:05:41.004347] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.622 [2024-06-11 14:05:41.004392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:01.622 [2024-06-11 14:05:41.004402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.004412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:01.622 [2024-06-11 14:05:41.004419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.004426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:01.622 [2024-06-11 14:05:41.004433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.004441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:01.622 [2024-06-11 14:05:41.004448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:41.024503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:38:01.622 [2024-06-11 14:05:41.024521] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:38:01.622 [2024-06-11 14:05:41.024528] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:38:01.622 [2024-06-11 14:05:41.028166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.622 [2024-06-11 14:05:41.085195] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
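The block above captures one complete path failover: the I/O qpair is reported disconnected and freed, bdev_nvme_failover_trid switches the target from 192.168.100.8:4420 to 192.168.100.8:4421, the controller briefly sits in a failed state (CQ transport error -6 on the admin qpair), and the sequence ends with "Resetting controller successful." A minimal sketch for pulling these failover events out of a saved copy of this console output follows; it is not part of the SPDK test suite, and the console.log path and the regular expressions are assumptions based only on the message formats visible in this run.

#!/usr/bin/env python3
# Sketch only -- not part of the SPDK test suite. Scans a saved copy of this
# console output for the bdev_nvme failover markers printed above and reports
# each failover together with the matching "Resetting controller successful".
# Assumptions: the log is saved as console.log (or passed as argv[1]) and the
# message formats are exactly the ones visible in this run.
import re
import sys

FAILOVER = re.compile(
    r"\[(?P<ts>[0-9 :.\-]+)\] bdev_nvme\.c:\d+:bdev_nvme_failover_trid: "
    r"\*NOTICE\*: Start failover from (?P<src>\S+) to (?P<dst>\S+)")
RESET_OK = re.compile(
    r"_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: Resetting controller successful")

def summarize(path):
    pending = None
    for line in open(path, errors="replace"):
        m = FAILOVER.search(line)
        if m:
            pending = (m.group("ts"), m.group("src"), m.group("dst"))
        if pending and RESET_OK.search(line):
            ts, src, dst = pending
            print(f"{ts}: failover {src} -> {dst} completed")
            pending = None

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")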
00:38:01.622 [2024-06-11 14:05:44.445314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.622 [2024-06-11 14:05:44.445631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.622 [2024-06-11 14:05:44.445640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 
14:05:44.445682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:80016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:38:01.623 [2024-06-11 14:05:44.445912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.445986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.445994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.623 [2024-06-11 14:05:44.446003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.623 [2024-06-11 14:05:44.446010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 
00:38:01.624 [2024-06-11 14:05:44.446155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x186e00 00:38:01.624 [2024-06-11 14:05:44.446409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 
00:38:01.624 [2024-06-11 14:05:44.446465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.624 [2024-06-11 14:05:44.446521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.624 [2024-06-11 14:05:44.446530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 
len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x186e00 00:38:01.625 
[2024-06-11 14:05:44.446768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.446800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.446987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.446994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.447077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 
sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186e00 00:38:01.625 [2024-06-11 14:05:44.447094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.625 [2024-06-11 14:05:44.447135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.625 [2024-06-11 14:05:44.447143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:44.447354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:44.447371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:44.447387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:80344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:44.447403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:44.447420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.447429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:44.447436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.449794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:01.626 [2024-06-11 14:05:44.449805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:01.626 [2024-06-11 14:05:44.449812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80368 len:8 PRP1 0x0 PRP2 0x0 00:38:01.626 [2024-06-11 14:05:44.449820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:44.449851] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:38:01.626 [2024-06-11 14:05:44.449863] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:38:01.626 [2024-06-11 14:05:44.449871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.626 [2024-06-11 14:05:44.453429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.626 [2024-06-11 14:05:44.473278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:38:01.626 [2024-06-11 14:05:44.538182] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
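The block above is one complete failover iteration: the queued verify I/O on the old RDMA qpair is completed manually with ABORTED - SQ DELETION status, the qpair is freed, bdev_nvme fails over from 192.168.100.8:4421 to 192.168.100.8:4422, and the controller reset succeeds. The path changes that provoke this are driven through scripts/rpc.py; the sketch below is assembled only from the RPC calls that appear later in this trace (the loop and the rpc/sock shorthand variables are editorial, not the verbatim failover.sh source, and error handling is omitted).

    # Minimal sketch of the path setup and the path drop that forces a failover.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Expose two extra target ports so the initiator has alternate paths.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422

    # Register all three paths with the running bdevperf instance under one bdev name.
    for port in 4420 4421 4422; do
        $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
            -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # Dropping the active path triggers the SQ-deletion aborts and the failover seen above.
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1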
00:38:01.626 [2024-06-11 14:05:48.794285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4352 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000075e2000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186e00 00:38:01.626 [2024-06-11 14:05:48.794539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.626 [2024-06-11 14:05:48.794581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.626 [2024-06-11 14:05:48.794588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 
[2024-06-11 14:05:48.794647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 
cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.794932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.794949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.794965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.794982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.794991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.794998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.795014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.795034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.795051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.795067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.795085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.795101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.795117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 
00:38:01.627 [2024-06-11 14:05:48.795126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.627 [2024-06-11 14:05:48.795134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.627 [2024-06-11 14:05:48.795143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186e00 00:38:01.627 [2024-06-11 14:05:48.795150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.628 [2024-06-11 14:05:48.795556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 
14:05:48.795589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.628 [2024-06-11 14:05:48.795716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186e00 00:38:01.628 [2024-06-11 14:05:48.795723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186e00 00:38:01.629 [2024-06-11 14:05:48.795955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.795970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.795987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.795996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796209] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.629 [2024-06-11 14:05:48.796339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.629 [2024-06-11 14:05:48.796346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.630 [2024-06-11 14:05:48.796355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.630 [2024-06-11 14:05:48.796362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.630 [2024-06-11 14:05:48.796372] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.630 [2024-06-11 14:05:48.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.630 [2024-06-11 14:05:48.796388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.630 [2024-06-11 14:05:48.796395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.630 [2024-06-11 14:05:48.796404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:01.630 [2024-06-11 14:05:48.796411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32765 cdw0:3eff200 sqhd:2b60 p:0 m:0 dnr:0 00:38:01.630 [2024-06-11 14:05:48.798812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:01.630 [2024-06-11 14:05:48.798823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:01.630 [2024-06-11 14:05:48.798831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5328 len:8 PRP1 0x0 PRP2 0x0 00:38:01.630 [2024-06-11 14:05:48.798838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:01.630 [2024-06-11 14:05:48.798871] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:38:01.630 [2024-06-11 14:05:48.798880] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:38:01.630 [2024-06-11 14:05:48.798887] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.630 [2024-06-11 14:05:48.802452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.630 [2024-06-11 14:05:48.821721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:38:01.630 [2024-06-11 14:05:48.880153] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
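Same pattern as the previous iteration, this time failing over from 192.168.100.8:4422 back to 192.168.100.8:4420; the long runs of per-command notices are simply the queued reads and writes being aborted with SQ-deletion status and re-driven on the new path. If a saved copy of this console output has to be checked after the fact, a purely illustrative summary could look like the following (console.log is a placeholder file name, not something the test produces).

    # Illustration only: condense the abort notices above from a saved copy of this log.
    grep -c 'ABORTED - SQ DELETION' console.log                                   # total aborted commands
    grep -o 'print_command: \*NOTICE\*: [A-Z]*' console.log | sort | uniq -c      # READ vs WRITE split
    grep -c 'Resetting controller successful' console.log                         # completed failovers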
00:38:01.630
00:38:01.630                                                                 Latency(us)
00:38:01.630 Device Information          : runtime(s)      IOPS     MiB/s   Fail/s     TO/s    Average        min        max
00:38:01.630 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:01.630    Verification LBA range: start 0x0 length 0x4000
00:38:01.630    NVMe0n1                  :      15.01  13129.11    51.29   306.35     0.00    9499.28     349.87 1041585.49
00:38:01.630 ===================================================================================================================
00:38:01.630 Total                       :              13129.11    51.29   306.35     0.00    9499.28     349.87 1041585.49
00:38:01.630 Received shutdown signal, test time was about 15.000000 seconds
00:38:01.630
00:38:01.630                                                                 Latency(us)
00:38:01.630 Device Information          : runtime(s)      IOPS     MiB/s   Fail/s     TO/s    Average        min        max
00:38:01.630 ===================================================================================================================
00:38:01.630 Total                       :                  0.00      0.00     0.00     0.00       0.00       0.00       0.00
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2367497
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2367497 /var/tmp/bdevperf.sock
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2367497 ']'
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
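The trace above is the pass/fail gate for the 15-second run: the captured bdevperf output is grepped for the reset notices, exactly three are expected (one per dropped path), and a second bdevperf instance is then started in RPC-driven mode for the 1-second follow-up. A condensed restatement of that gate is sketched below; $testdir and $rootdir are editorial placeholders (the trace uses absolute workspace paths, and the captured output is the try.txt file removed at the end of the test), and waitforlisten is the harness helper visible in the trace.

    # Condensed restatement of the traced check, not the verbatim failover.sh source.
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    (( count == 3 )) || exit 1    # one successful reset per dropped path: 4420 -> 4421 -> 4422 -> 4420

    # Second bdevperf in RPC-driven mode: -z waits for a perform_tests RPC on the given socket.
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock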
00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:01.630 14:05:54 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:02.201 14:05:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:02.201 14:05:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:38:02.201 14:05:55 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:38:02.462 [2024-06-11 14:05:55.170282] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:38:02.462 14:05:55 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:38:02.462 [2024-06-11 14:05:55.330760] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:38:02.462 14:05:55 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:02.722 NVMe0n1 00:38:02.722 14:05:55 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:02.983 00:38:02.983 14:05:55 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:03.243 00:38:03.243 14:05:56 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:03.243 14:05:56 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:38:03.504 14:05:56 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:03.504 14:05:56 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:38:06.806 14:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:06.806 14:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:38:06.806 14:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2368517 00:38:06.806 14:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 2368517 00:38:06.806 14:05:59 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:08.193 0 00:38:08.193 14:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:08.193 [2024-06-11 14:05:54.251844] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:38:08.193 [2024-06-11 14:05:54.251900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367497 ] 00:38:08.193 EAL: No free 2048 kB hugepages reported on node 1 00:38:08.193 [2024-06-11 14:05:54.311780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.193 [2024-06-11 14:05:54.375562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.193 [2024-06-11 14:05:56.376797] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:38:08.193 [2024-06-11 14:05:56.377507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:08.193 [2024-06-11 14:05:56.377548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:08.193 [2024-06-11 14:05:56.408193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:38:08.193 [2024-06-11 14:05:56.433460] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:38:08.193 Running I/O for 1 seconds...
00:38:08.193
00:38:08.193 Latency(us)
00:38:08.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:08.193 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:08.193 Verification LBA range: start 0x0 length 0x4000
00:38:08.193 NVMe0n1 : 1.00 16815.87 65.69 0.00 0.00 7565.72 1952.43 14964.05
00:38:08.193 ===================================================================================================================
00:38:08.193 Total : 16815.87 65.69 0.00 0.00 7565.72 1952.43 14964.05
00:38:08.193 14:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:08.193 14:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:38:08.193 14:06:00 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:08.193 14:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:08.193 14:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:38:08.454 14:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:08.715 14:06:01 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 2367497 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2367497 ']' 00:38:12.013 14:06:04
nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2367497 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2367497 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2367497' 00:38:12.013 killing process with pid 2367497 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2367497 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2367497 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:12.013 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:38:12.013 rmmod nvme_rdma 00:38:12.274 rmmod nvme_fabrics 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2364033 ']' 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2364033 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2364033 ']' 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2364033 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:12.274 14:06:04 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2364033 00:38:12.274 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:12.274 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:12.274 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
2364033' 00:38:12.274 killing process with pid 2364033 00:38:12.274 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2364033 00:38:12.275 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2364033 00:38:12.536 14:06:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:12.536 14:06:05 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:38:12.536 00:38:12.536 real 0m37.081s 00:38:12.536 user 2m1.713s 00:38:12.536 sys 0m7.017s 00:38:12.536 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:12.536 14:06:05 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:38:12.536 ************************************ 00:38:12.536 END TEST nvmf_failover 00:38:12.536 ************************************ 00:38:12.536 14:06:05 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:38:12.536 14:06:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:12.536 14:06:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:12.536 14:06:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:12.536 ************************************ 00:38:12.536 START TEST nvmf_host_discovery 00:38:12.536 ************************************ 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:38:12.536 * Looking for test storage... 00:38:12.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.536 14:06:05 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:38:12.536 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:38:12.536 00:38:12.536 real 0m0.131s 00:38:12.536 user 0m0.057s 00:38:12.536 sys 0m0.082s 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:12.536 14:06:05 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:38:12.536 ************************************ 00:38:12.536 END TEST nvmf_host_discovery 00:38:12.536 ************************************ 00:38:12.798 14:06:05 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:38:12.798 14:06:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:12.798 14:06:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:12.798 14:06:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:12.798 ************************************ 00:38:12.798 START TEST nvmf_host_multipath_status 00:38:12.798 ************************************ 00:38:12.798 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:38:12.798 * Looking for test storage... 
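nvmf_host_discovery is a no-op on this transport: discovery.sh@11-@13 detects RDMA and exits before doing any work. The guard amounts to something like the following sketch; the transport variable name is an assumption, since the trace only shows the already-expanded '[' rdma == rdma ']' comparison and the echoed message:

# Loose reconstruction of the discovery.sh RDMA skip guard.
if [ "$TEST_TRANSPORT" = rdma ]; then   # assumed variable name
    echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
    exit 0
fi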
00:38:12.798 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:12.799 14:06:05 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:38:12.799 14:06:05 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:38:20.947 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:38:20.947 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:20.947 
14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:38:20.947 Found net devices under 0000:98:00.0: mlx_0_0 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:38:20.947 Found net devices under 0000:98:00.1: mlx_0_1 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:38:20.947 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:38:20.948 14:06:12 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:38:20.948 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:20.948 link/ether ec:0d:9a:8b:2e:0c brd 
ff:ff:ff:ff:ff:ff 00:38:20.948 altname enp152s0f0np0 00:38:20.948 altname ens817f0np0 00:38:20.948 inet 192.168.100.8/24 scope global mlx_0_0 00:38:20.948 valid_lft forever preferred_lft forever 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:38:20.948 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:20.948 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:38:20.948 altname enp152s0f1np1 00:38:20.948 altname ens817f1np1 00:38:20.948 inet 192.168.100.9/24 scope global mlx_0_1 00:38:20.948 valid_lft forever preferred_lft forever 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:20.948 14:06:12 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:38:20.948 192.168.100.9' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:38:20.948 192.168.100.9' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:38:20.948 192.168.100.9' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2374090 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2374090 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2374090 ']' 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:20.948 14:06:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:20.948 [2024-06-11 14:06:12.923034] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:38:20.949 [2024-06-11 14:06:12.923101] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.949 EAL: No free 2048 kB hugepages reported on node 1 00:38:20.949 [2024-06-11 14:06:12.988847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:20.949 [2024-06-11 14:06:13.061894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:20.949 [2024-06-11 14:06:13.061935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:20.949 [2024-06-11 14:06:13.061942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.949 [2024-06-11 14:06:13.061949] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.949 [2024-06-11 14:06:13.061954] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
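nvmfappstart, traced just above, launches the target with the core and event masks from the log and then blocks in waitforlisten until the application answers on its RPC socket. A simplified sketch of that startup, assuming an rpc_get_methods poll as the readiness test (the real waitforlisten lives in common/autotest_common.sh); the binary path, flags and socket are the ones shown in the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the target is ready, with bounded retries.
max_retries=100
until "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    (( --max_retries > 0 )) || { echo "nvmf_tgt (pid $nvmfpid) never started listening" >&2; exit 1; }
    sleep 0.5
done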
00:38:20.949 [2024-06-11 14:06:13.062092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.949 [2024-06-11 14:06:13.062242] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2374090 00:38:20.949 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:38:21.210 [2024-06-11 14:06:13.897139] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13b57b0/0x13b9ca0) succeed. 00:38:21.210 [2024-06-11 14:06:13.909267] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13b6cb0/0x13fb330) succeed. 00:38:21.210 14:06:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:38:21.471 Malloc0 00:38:21.471 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:38:21.471 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:21.733 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:38:21.733 [2024-06-11 14:06:14.594123] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:38:21.733 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:38:21.994 [2024-06-11 14:06:14.734196] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:38:21.994 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2374449 00:38:21.994 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:21.994 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:38:21.994 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 2374449 /var/tmp/bdevperf.sock 00:38:21.994 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2374449 ']' 00:38:21.995 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:21.995 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:21.995 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:21.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:21.995 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:21.995 14:06:14 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:22.937 14:06:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:22.937 14:06:15 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:38:22.937 14:06:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:38:22.937 14:06:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:38:23.197 Nvme0n1 00:38:23.197 14:06:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:38:23.457 Nvme0n1 00:38:23.457 14:06:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:38:23.457 14:06:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:38:25.377 14:06:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:38:25.377 14:06:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:38:25.676 14:06:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:25.676 14:06:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:38:26.663 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:38:26.663 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:26.923 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:38:26.923 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:26.923 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:26.923 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:26.923 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:26.923 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:27.183 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:27.183 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:27.183 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:27.183 14:06:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:27.183 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:27.183 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:27.183 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:27.183 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:27.442 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:27.442 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:27.442 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:27.442 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:38:27.701 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:38:27.960 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:38:28.219 14:06:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:38:29.160 14:06:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:38:29.160 14:06:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:38:29.160 14:06:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:38:29.160 14:06:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:38:29.423 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:38:29.684 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:38:29.684 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:38:29.684 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:38:29.685 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:38:29.685 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:38:29.685 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:38:29.685 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:29.685 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:29.945 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:29.945 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:29.945 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:29.945 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:30.205 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:30.205 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:38:30.205 14:06:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:30.205 14:06:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:38:30.465 14:06:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:38:31.404 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:38:31.404 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:31.404 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:31.405 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:31.665 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:31.665 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:31.665 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:31.665 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:31.925 
14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:31.925 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:32.186 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:32.186 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:32.186 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:32.186 14:06:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:32.186 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:32.186 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:32.186 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:32.186 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:32.446 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:32.446 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:38:32.446 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:32.706 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:38:32.706 14:06:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:38:33.646 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:38:33.646 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:33.646 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:33.646 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:38:33.907 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:33.907 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:33.907 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:33.907 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:34.167 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:34.167 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:34.167 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:34.167 14:06:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:34.167 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:34.167 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:34.167 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:34.167 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:34.427 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:34.427 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:34.427 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:34.427 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:34.688 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:34.688 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:34.688 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:34.688 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:34.948 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:34.948 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:38:34.948 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:38:34.948 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:38:35.208 14:06:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:38:36.149 14:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:38:36.149 14:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:36.149 14:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.149 14:06:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:36.409 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:36.409 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:36.409 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.409 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:36.409 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:36.409 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:36.410 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.410 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:36.670 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:36.670 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:36.670 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.670 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.930 
14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:36.930 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:37.191 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:37.191 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:38:37.191 14:06:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:38:37.451 14:06:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:37.451 14:06:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:38.834 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:38:39.095 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:39.095 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:39.095 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:39.095 14:06:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:39.095 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:39.095 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:39.356 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:39.356 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:39.356 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:39.356 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:39.356 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:39.356 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:39.618 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:39.618 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:38:39.618 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:38:39.618 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:38:39.879 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:40.140 14:06:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:38:41.083 14:06:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:38:41.083 14:06:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:41.083 14:06:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:41.083 14:06:33 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:41.343 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:41.604 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:41.604 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:41.604 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:41.604 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:41.605 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:41.605 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:41.605 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:41.605 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:41.866 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:41.866 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:41.866 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:41.866 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:42.127 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:42.127 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:38:42.127 
14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:42.127 14:06:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:38:42.388 14:06:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:38:43.332 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:38:43.332 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:43.332 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:43.332 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:43.593 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:43.853 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:43.853 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:43.853 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:43.853 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:44.113 14:06:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:44.373 14:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:44.373 14:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:38:44.373 14:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:44.634 14:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:38:44.634 14:06:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:38:45.610 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:38:45.610 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:45.610 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:45.610 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:45.871 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:45.871 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:45.871 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:45.871 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:46.132 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:46.132 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:46.132 14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:46.132 
14:06:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:46.132 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:46.132 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:46.132 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:46.132 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:46.392 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:46.392 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:46.392 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:46.392 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:38:46.653 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:38:46.914 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:38:46.914 14:06:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:38:48.297 14:06:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:38:48.297 14:06:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:48.297 14:06:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:48.297 14:06:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:48.297 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:48.557 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:48.557 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:48.557 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:48.557 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:48.882 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2374449 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2374449 ']' 00:38:49.142 
14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2374449 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2374449 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2374449' 00:38:49.142 killing process with pid 2374449 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2374449 00:38:49.142 14:06:41 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2374449 00:38:49.142 Connection closed with partial response: 00:38:49.142 00:38:49.142 00:38:49.405 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2374449 00:38:49.405 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:38:49.405 [2024-06-11 14:06:14.793596] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:38:49.405 [2024-06-11 14:06:14.793651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374449 ] 00:38:49.405 EAL: No free 2048 kB hugepages reported on node 1 00:38:49.405 [2024-06-11 14:06:14.844360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.405 [2024-06-11 14:06:14.896092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:49.405 Running I/O for 90 seconds... 
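Before the rest of the bdevperf trace, it may help to condense what the run above exercised. The test drives the two rdma listeners of cnode1 through an ANA state matrix (optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible, inaccessible/optimized), then switches the controller bdev to the active_active multipath policy and repeats part of the matrix. The following is a condensed sketch assembled from the rpc.py calls captured in this log, not the verbatim test script (folding the sleep into the helper is a simplification):

  # Set the ANA state of both listeners, then give the host a moment
  # to re-evaluate its paths before the next check_status round.
  set_ANA_state() {
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n "$1"
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n "$2"
      sleep 1
  }

  set_ANA_state optimized optimized          # 4420 stays the current path
  set_ANA_state non_optimized optimized      # current path moves to 4421
  set_ANA_state non_optimized inaccessible   # 4421 reported as not accessible
  # Switch the controller bdev to active/active and repeat the matrix:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

The expected current/connected/accessible values asserted after each transition are the six booleans passed to the check_status calls logged above.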
00:38:49.405 [2024-06-11 14:06:27.768479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:38:49.405 [2024-06-11 14:06:27.768542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:38:49.405 [2024-06-11 14:06:27.768558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:38:49.405 [2024-06-11 14:06:27.768571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:38:49.405 [2024-06-11 14:06:27.768583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:38:49.405 [2024-06-11 14:06:27.768596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:38:49.405 [2024-06-11 14:06:27.768609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x186e00 00:38:49.405 [2024-06-11 14:06:27.768613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 
14:06:27.768646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:54 nsid:1 lba:40752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.768954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.768959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40824 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40896 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007552000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x186e00 00:38:49.406 [2024-06-11 14:06:27.769500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:49.406 [2024-06-11 14:06:27.769509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x186e00 
00:38:49.406 [2024-06-11 14:06:27.769515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x186e00 00:38:49.407 [2024-06-11 14:06:27.769615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.769992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.769997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:49.407 [2024-06-11 14:06:27.770851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.407 [2024-06-11 14:06:27.770856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.770874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.770892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.770911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.770928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 
14:06:27.770947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.770964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.770984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.770997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:27.771002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:27.771215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:27.771220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.792502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.792536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.792918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.792932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.792945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.792957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.792970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.792982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.792990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.792994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x186e00 00:38:49.408 [2024-06-11 14:06:39.793225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:49.408 [2024-06-11 14:06:39.793279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.408 [2024-06-11 14:06:39.793284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x186e00 
00:38:49.409 [2024-06-11 14:06:39.793296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71288 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:49.409 [2024-06-11 14:06:39.793808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:49.409 [2024-06-11 14:06:39.793815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x186e00 00:38:49.409 [2024-06-11 14:06:39.793820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:38:49.409 [2024-06-11 14:06:39.793827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x186e00
00:38:49.409 [2024-06-11 14:06:39.793832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:38:49.409 [2024-06-11 14:06:39.793840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:49.409 [2024-06-11 14:06:39.793845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:38:49.409 [2024-06-11 14:06:39.793852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:38:49.409 [2024-06-11 14:06:39.793857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:38:49.409 Received shutdown signal, test time was about 25.573460 seconds
00:38:49.409
00:38:49.409                                                                                                 Latency(us)
00:38:49.409 Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:38:49.409 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:49.409 	 Verification LBA range: start 0x0 length 0x4000
00:38:49.409 	 Nvme0n1             :      25.57   15673.84      61.23      0.00      0.00    8148.18      84.91 3019898.88
00:38:49.409 ===================================================================================================================
00:38:49.409 Total                                  :      25.57   15673.84      61.23      0.00      0.00    8148.18      84.91 3019898.88
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n
2374090 ']' 00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2374090 00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2374090 ']' 00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2374090 00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:49.409 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2374090 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2374090' 00:38:49.670 killing process with pid 2374090 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2374090 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2374090 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:38:49.670 00:38:49.670 real 0m37.051s 00:38:49.670 user 1m42.648s 00:38:49.670 sys 0m8.641s 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:49.670 14:06:42 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:49.670 ************************************ 00:38:49.670 END TEST nvmf_host_multipath_status 00:38:49.670 ************************************ 00:38:49.670 14:06:42 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:38:49.670 14:06:42 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:49.670 14:06:42 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:49.670 14:06:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:49.932 ************************************ 00:38:49.932 START TEST nvmf_discovery_remove_ifc 00:38:49.932 ************************************ 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:38:49.932 * Looking for test storage... 
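As an aside for anyone reproducing this cleanup by hand, the teardown that the nvmf_host_multipath_status trace above performs boils down to a short shell sequence. The following is a minimal sketch reconstructed from the xtrace output, not the test script verbatim; rpc_py, testdir and nvmfpid are illustrative placeholder variables, not names taken from the log.

  # drop the subsystem that the multipath test created on the SPDK target
  "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # clear the test's exit traps and its temporary output file
  trap - SIGINT SIGTERM EXIT
  rm -f "$testdir/try.txt"
  # flush, then unload the kernel initiator modules used for the RDMA connection
  sync
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  # finally stop the SPDK nvmf target process that served the test
  kill "$nvmfpid"
  wait "$nvmfpid" 2>/dev/null || true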
00:38:49.932 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:38:49.932 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
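The skip message above comes from a transport guard near the top of discovery_remove_ifc.sh (script lines 14-16 in the trace). A minimal sketch of that pattern follows; TEST_TRANSPORT is used here only as an assumed name for the transport variable, since the trace shows the already-expanded value "rdma".

  # bail out early when the suite runs over RDMA: the rdma stack cannot put
  # host and target on the same IP, which this test requires
  if [ "$TEST_TRANSPORT" == "rdma" ]; then
      echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
      exit 0
  fi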
00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:38:49.932 00:38:49.932 real 0m0.131s 00:38:49.932 user 0m0.058s 00:38:49.932 sys 0m0.081s 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:49.932 14:06:42 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:49.932 ************************************ 00:38:49.932 END TEST nvmf_discovery_remove_ifc 00:38:49.932 ************************************ 00:38:49.932 14:06:42 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:38:49.932 14:06:42 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:49.932 14:06:42 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:49.932 14:06:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:38:49.932 ************************************ 00:38:49.932 START TEST nvmf_identify_kernel_target 00:38:49.932 ************************************ 00:38:49.932 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:38:50.193 * Looking for test storage... 00:38:50.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.193 14:06:42 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:38:50.193 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:50.194 14:06:42 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:38:50.194 14:06:42 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:38:58.339 
14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.339 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:38:58.340 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:38:58.340 Found 0000:98:00.1 (0x15b3 - 
0x1015) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:38:58.340 Found net devices under 0000:98:00.0: mlx_0_0 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:38:58.340 Found net devices under 0000:98:00.1: mlx_0_1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:38:58.340 14:06:49 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_0 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:38:58.340 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:58.340 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:38:58.340 altname enp152s0f0np0 00:38:58.340 altname ens817f0np0 00:38:58.340 inet 192.168.100.8/24 scope global mlx_0_0 00:38:58.340 valid_lft forever preferred_lft forever 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:38:58.340 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:38:58.340 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:38:58.340 altname enp152s0f1np1 00:38:58.340 altname ens817f1np1 00:38:58.340 inet 192.168.100.9/24 scope global mlx_0_1 00:38:58.340 valid_lft forever preferred_lft forever 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:38:58.340 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:38:58.341 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:38:58.341 14:06:49 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:58.341 14:06:50 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:38:58.341 192.168.100.9' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:38:58.341 192.168.100.9' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:38:58.341 192.168.100.9' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:58.341 14:06:50 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:39:00.885 Waiting for block devices as requested 00:39:00.885 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:00.885 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:00.885 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:00.885 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:00.885 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:01.146 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:01.146 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:01.146 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:01.406 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:01.406 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:01.406 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:01.666 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:01.666 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:01.666 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:01.926 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:01.926 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:01.926 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:01.926 No valid GPT data, bailing 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:01.926 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:39:02.187 00:39:02.187 Discovery Log Number of Records 2, Generation counter 2 00:39:02.187 =====Discovery Log Entry 0====== 00:39:02.187 trtype: rdma 00:39:02.187 adrfam: ipv4 00:39:02.187 subtype: current discovery subsystem 00:39:02.187 treq: not specified, sq flow control disable supported 00:39:02.187 portid: 1 00:39:02.187 trsvcid: 4420 00:39:02.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:02.187 traddr: 192.168.100.8 00:39:02.187 eflags: none 00:39:02.187 rdma_prtype: not specified 00:39:02.187 rdma_qptype: connected 00:39:02.187 rdma_cms: rdma-cm 00:39:02.187 rdma_pkey: 0x0000 00:39:02.187 =====Discovery Log Entry 1====== 00:39:02.187 trtype: rdma 00:39:02.187 adrfam: ipv4 00:39:02.187 subtype: nvme subsystem 00:39:02.187 treq: not specified, sq flow control disable supported 00:39:02.187 portid: 1 00:39:02.187 trsvcid: 4420 00:39:02.187 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:02.187 traddr: 192.168.100.8 00:39:02.187 eflags: none 00:39:02.187 rdma_prtype: not specified 00:39:02.187 rdma_qptype: connected 00:39:02.187 rdma_cms: rdma-cm 00:39:02.187 rdma_pkey: 0x0000 00:39:02.187 14:06:54 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:39:02.187 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:39:02.187 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.187 ===================================================== 00:39:02.187 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:39:02.187 ===================================================== 00:39:02.187 Controller Capabilities/Features 00:39:02.187 ================================ 00:39:02.187 Vendor ID: 0000 00:39:02.187 Subsystem Vendor ID: 0000 00:39:02.187 Serial Number: 8492d4ef3daf3db87bb3 00:39:02.187 Model Number: Linux 00:39:02.187 Firmware Version: 6.7.0-68 00:39:02.187 Recommended Arb Burst: 0 00:39:02.187 IEEE OUI Identifier: 00 00 00 00:39:02.187 Multi-path I/O 00:39:02.187 May have multiple subsystem ports: No 00:39:02.187 May have multiple controllers: No 00:39:02.187 Associated with SR-IOV VF: No 00:39:02.187 
Max Data Transfer Size: Unlimited 00:39:02.187 Max Number of Namespaces: 0 00:39:02.187 Max Number of I/O Queues: 1024 00:39:02.187 NVMe Specification Version (VS): 1.3 00:39:02.187 NVMe Specification Version (Identify): 1.3 00:39:02.187 Maximum Queue Entries: 128 00:39:02.187 Contiguous Queues Required: No 00:39:02.187 Arbitration Mechanisms Supported 00:39:02.187 Weighted Round Robin: Not Supported 00:39:02.187 Vendor Specific: Not Supported 00:39:02.187 Reset Timeout: 7500 ms 00:39:02.187 Doorbell Stride: 4 bytes 00:39:02.187 NVM Subsystem Reset: Not Supported 00:39:02.187 Command Sets Supported 00:39:02.187 NVM Command Set: Supported 00:39:02.187 Boot Partition: Not Supported 00:39:02.187 Memory Page Size Minimum: 4096 bytes 00:39:02.187 Memory Page Size Maximum: 4096 bytes 00:39:02.187 Persistent Memory Region: Not Supported 00:39:02.187 Optional Asynchronous Events Supported 00:39:02.187 Namespace Attribute Notices: Not Supported 00:39:02.187 Firmware Activation Notices: Not Supported 00:39:02.187 ANA Change Notices: Not Supported 00:39:02.187 PLE Aggregate Log Change Notices: Not Supported 00:39:02.187 LBA Status Info Alert Notices: Not Supported 00:39:02.187 EGE Aggregate Log Change Notices: Not Supported 00:39:02.187 Normal NVM Subsystem Shutdown event: Not Supported 00:39:02.187 Zone Descriptor Change Notices: Not Supported 00:39:02.187 Discovery Log Change Notices: Supported 00:39:02.187 Controller Attributes 00:39:02.187 128-bit Host Identifier: Not Supported 00:39:02.187 Non-Operational Permissive Mode: Not Supported 00:39:02.187 NVM Sets: Not Supported 00:39:02.187 Read Recovery Levels: Not Supported 00:39:02.187 Endurance Groups: Not Supported 00:39:02.187 Predictable Latency Mode: Not Supported 00:39:02.187 Traffic Based Keep ALive: Not Supported 00:39:02.187 Namespace Granularity: Not Supported 00:39:02.187 SQ Associations: Not Supported 00:39:02.187 UUID List: Not Supported 00:39:02.187 Multi-Domain Subsystem: Not Supported 00:39:02.187 Fixed Capacity Management: Not Supported 00:39:02.187 Variable Capacity Management: Not Supported 00:39:02.187 Delete Endurance Group: Not Supported 00:39:02.187 Delete NVM Set: Not Supported 00:39:02.187 Extended LBA Formats Supported: Not Supported 00:39:02.187 Flexible Data Placement Supported: Not Supported 00:39:02.187 00:39:02.187 Controller Memory Buffer Support 00:39:02.187 ================================ 00:39:02.187 Supported: No 00:39:02.187 00:39:02.187 Persistent Memory Region Support 00:39:02.187 ================================ 00:39:02.187 Supported: No 00:39:02.187 00:39:02.187 Admin Command Set Attributes 00:39:02.187 ============================ 00:39:02.187 Security Send/Receive: Not Supported 00:39:02.187 Format NVM: Not Supported 00:39:02.188 Firmware Activate/Download: Not Supported 00:39:02.188 Namespace Management: Not Supported 00:39:02.188 Device Self-Test: Not Supported 00:39:02.188 Directives: Not Supported 00:39:02.188 NVMe-MI: Not Supported 00:39:02.188 Virtualization Management: Not Supported 00:39:02.188 Doorbell Buffer Config: Not Supported 00:39:02.188 Get LBA Status Capability: Not Supported 00:39:02.188 Command & Feature Lockdown Capability: Not Supported 00:39:02.188 Abort Command Limit: 1 00:39:02.188 Async Event Request Limit: 1 00:39:02.188 Number of Firmware Slots: N/A 00:39:02.188 Firmware Slot 1 Read-Only: N/A 00:39:02.188 Firmware Activation Without Reset: N/A 00:39:02.188 Multiple Update Detection Support: N/A 00:39:02.188 Firmware Update Granularity: No Information Provided 00:39:02.188 
Per-Namespace SMART Log: No 00:39:02.188 Asymmetric Namespace Access Log Page: Not Supported 00:39:02.188 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:39:02.188 Command Effects Log Page: Not Supported 00:39:02.188 Get Log Page Extended Data: Supported 00:39:02.188 Telemetry Log Pages: Not Supported 00:39:02.188 Persistent Event Log Pages: Not Supported 00:39:02.188 Supported Log Pages Log Page: May Support 00:39:02.188 Commands Supported & Effects Log Page: Not Supported 00:39:02.188 Feature Identifiers & Effects Log Page:May Support 00:39:02.188 NVMe-MI Commands & Effects Log Page: May Support 00:39:02.188 Data Area 4 for Telemetry Log: Not Supported 00:39:02.188 Error Log Page Entries Supported: 1 00:39:02.188 Keep Alive: Not Supported 00:39:02.188 00:39:02.188 NVM Command Set Attributes 00:39:02.188 ========================== 00:39:02.188 Submission Queue Entry Size 00:39:02.188 Max: 1 00:39:02.188 Min: 1 00:39:02.188 Completion Queue Entry Size 00:39:02.188 Max: 1 00:39:02.188 Min: 1 00:39:02.188 Number of Namespaces: 0 00:39:02.188 Compare Command: Not Supported 00:39:02.188 Write Uncorrectable Command: Not Supported 00:39:02.188 Dataset Management Command: Not Supported 00:39:02.188 Write Zeroes Command: Not Supported 00:39:02.188 Set Features Save Field: Not Supported 00:39:02.188 Reservations: Not Supported 00:39:02.188 Timestamp: Not Supported 00:39:02.188 Copy: Not Supported 00:39:02.188 Volatile Write Cache: Not Present 00:39:02.188 Atomic Write Unit (Normal): 1 00:39:02.188 Atomic Write Unit (PFail): 1 00:39:02.188 Atomic Compare & Write Unit: 1 00:39:02.188 Fused Compare & Write: Not Supported 00:39:02.188 Scatter-Gather List 00:39:02.188 SGL Command Set: Supported 00:39:02.188 SGL Keyed: Supported 00:39:02.188 SGL Bit Bucket Descriptor: Not Supported 00:39:02.188 SGL Metadata Pointer: Not Supported 00:39:02.188 Oversized SGL: Not Supported 00:39:02.188 SGL Metadata Address: Not Supported 00:39:02.188 SGL Offset: Supported 00:39:02.188 Transport SGL Data Block: Not Supported 00:39:02.188 Replay Protected Memory Block: Not Supported 00:39:02.188 00:39:02.188 Firmware Slot Information 00:39:02.188 ========================= 00:39:02.188 Active slot: 0 00:39:02.188 00:39:02.188 00:39:02.188 Error Log 00:39:02.188 ========= 00:39:02.188 00:39:02.188 Active Namespaces 00:39:02.188 ================= 00:39:02.188 Discovery Log Page 00:39:02.188 ================== 00:39:02.188 Generation Counter: 2 00:39:02.188 Number of Records: 2 00:39:02.188 Record Format: 0 00:39:02.188 00:39:02.188 Discovery Log Entry 0 00:39:02.188 ---------------------- 00:39:02.188 Transport Type: 1 (RDMA) 00:39:02.188 Address Family: 1 (IPv4) 00:39:02.188 Subsystem Type: 3 (Current Discovery Subsystem) 00:39:02.188 Entry Flags: 00:39:02.188 Duplicate Returned Information: 0 00:39:02.188 Explicit Persistent Connection Support for Discovery: 0 00:39:02.188 Transport Requirements: 00:39:02.188 Secure Channel: Not Specified 00:39:02.188 Port ID: 1 (0x0001) 00:39:02.188 Controller ID: 65535 (0xffff) 00:39:02.188 Admin Max SQ Size: 32 00:39:02.188 Transport Service Identifier: 4420 00:39:02.188 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:39:02.188 Transport Address: 192.168.100.8 00:39:02.188 Transport Specific Address Subtype - RDMA 00:39:02.188 RDMA QP Service Type: 1 (Reliable Connected) 00:39:02.188 RDMA Provider Type: 1 (No provider specified) 00:39:02.188 RDMA CM Service: 1 (RDMA_CM) 00:39:02.188 Discovery Log Entry 1 00:39:02.188 ---------------------- 00:39:02.188 
Transport Type: 1 (RDMA) 00:39:02.188 Address Family: 1 (IPv4) 00:39:02.188 Subsystem Type: 2 (NVM Subsystem) 00:39:02.188 Entry Flags: 00:39:02.188 Duplicate Returned Information: 0 00:39:02.188 Explicit Persistent Connection Support for Discovery: 0 00:39:02.188 Transport Requirements: 00:39:02.188 Secure Channel: Not Specified 00:39:02.188 Port ID: 1 (0x0001) 00:39:02.188 Controller ID: 65535 (0xffff) 00:39:02.188 Admin Max SQ Size: 32 00:39:02.188 Transport Service Identifier: 4420 00:39:02.188 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:39:02.188 Transport Address: 192.168.100.8 00:39:02.188 Transport Specific Address Subtype - RDMA 00:39:02.188 RDMA QP Service Type: 1 (Reliable Connected) 00:39:02.449 RDMA Provider Type: 1 (No provider specified) 00:39:02.449 RDMA CM Service: 1 (RDMA_CM) 00:39:02.449 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:02.449 EAL: No free 2048 kB hugepages reported on node 1 00:39:02.449 get_feature(0x01) failed 00:39:02.449 get_feature(0x02) failed 00:39:02.449 get_feature(0x04) failed 00:39:02.449 ===================================================== 00:39:02.449 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:39:02.449 ===================================================== 00:39:02.449 Controller Capabilities/Features 00:39:02.449 ================================ 00:39:02.449 Vendor ID: 0000 00:39:02.449 Subsystem Vendor ID: 0000 00:39:02.449 Serial Number: 86e5bf78a6c2dc3afc5a 00:39:02.449 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:39:02.449 Firmware Version: 6.7.0-68 00:39:02.449 Recommended Arb Burst: 6 00:39:02.449 IEEE OUI Identifier: 00 00 00 00:39:02.449 Multi-path I/O 00:39:02.449 May have multiple subsystem ports: Yes 00:39:02.449 May have multiple controllers: Yes 00:39:02.449 Associated with SR-IOV VF: No 00:39:02.449 Max Data Transfer Size: 1048576 00:39:02.449 Max Number of Namespaces: 1024 00:39:02.449 Max Number of I/O Queues: 128 00:39:02.449 NVMe Specification Version (VS): 1.3 00:39:02.449 NVMe Specification Version (Identify): 1.3 00:39:02.449 Maximum Queue Entries: 128 00:39:02.449 Contiguous Queues Required: No 00:39:02.449 Arbitration Mechanisms Supported 00:39:02.449 Weighted Round Robin: Not Supported 00:39:02.449 Vendor Specific: Not Supported 00:39:02.449 Reset Timeout: 7500 ms 00:39:02.449 Doorbell Stride: 4 bytes 00:39:02.449 NVM Subsystem Reset: Not Supported 00:39:02.449 Command Sets Supported 00:39:02.449 NVM Command Set: Supported 00:39:02.449 Boot Partition: Not Supported 00:39:02.449 Memory Page Size Minimum: 4096 bytes 00:39:02.449 Memory Page Size Maximum: 4096 bytes 00:39:02.449 Persistent Memory Region: Not Supported 00:39:02.449 Optional Asynchronous Events Supported 00:39:02.449 Namespace Attribute Notices: Supported 00:39:02.449 Firmware Activation Notices: Not Supported 00:39:02.449 ANA Change Notices: Supported 00:39:02.449 PLE Aggregate Log Change Notices: Not Supported 00:39:02.449 LBA Status Info Alert Notices: Not Supported 00:39:02.449 EGE Aggregate Log Change Notices: Not Supported 00:39:02.449 Normal NVM Subsystem Shutdown event: Not Supported 00:39:02.449 Zone Descriptor Change Notices: Not Supported 00:39:02.449 Discovery Log Change Notices: Not Supported 00:39:02.449 Controller Attributes 00:39:02.449 128-bit Host Identifier: 
Supported 00:39:02.449 Non-Operational Permissive Mode: Not Supported 00:39:02.449 NVM Sets: Not Supported 00:39:02.449 Read Recovery Levels: Not Supported 00:39:02.449 Endurance Groups: Not Supported 00:39:02.449 Predictable Latency Mode: Not Supported 00:39:02.449 Traffic Based Keep ALive: Supported 00:39:02.449 Namespace Granularity: Not Supported 00:39:02.449 SQ Associations: Not Supported 00:39:02.449 UUID List: Not Supported 00:39:02.449 Multi-Domain Subsystem: Not Supported 00:39:02.449 Fixed Capacity Management: Not Supported 00:39:02.449 Variable Capacity Management: Not Supported 00:39:02.449 Delete Endurance Group: Not Supported 00:39:02.449 Delete NVM Set: Not Supported 00:39:02.449 Extended LBA Formats Supported: Not Supported 00:39:02.449 Flexible Data Placement Supported: Not Supported 00:39:02.449 00:39:02.450 Controller Memory Buffer Support 00:39:02.450 ================================ 00:39:02.450 Supported: No 00:39:02.450 00:39:02.450 Persistent Memory Region Support 00:39:02.450 ================================ 00:39:02.450 Supported: No 00:39:02.450 00:39:02.450 Admin Command Set Attributes 00:39:02.450 ============================ 00:39:02.450 Security Send/Receive: Not Supported 00:39:02.450 Format NVM: Not Supported 00:39:02.450 Firmware Activate/Download: Not Supported 00:39:02.450 Namespace Management: Not Supported 00:39:02.450 Device Self-Test: Not Supported 00:39:02.450 Directives: Not Supported 00:39:02.450 NVMe-MI: Not Supported 00:39:02.450 Virtualization Management: Not Supported 00:39:02.450 Doorbell Buffer Config: Not Supported 00:39:02.450 Get LBA Status Capability: Not Supported 00:39:02.450 Command & Feature Lockdown Capability: Not Supported 00:39:02.450 Abort Command Limit: 4 00:39:02.450 Async Event Request Limit: 4 00:39:02.450 Number of Firmware Slots: N/A 00:39:02.450 Firmware Slot 1 Read-Only: N/A 00:39:02.450 Firmware Activation Without Reset: N/A 00:39:02.450 Multiple Update Detection Support: N/A 00:39:02.450 Firmware Update Granularity: No Information Provided 00:39:02.450 Per-Namespace SMART Log: Yes 00:39:02.450 Asymmetric Namespace Access Log Page: Supported 00:39:02.450 ANA Transition Time : 10 sec 00:39:02.450 00:39:02.450 Asymmetric Namespace Access Capabilities 00:39:02.450 ANA Optimized State : Supported 00:39:02.450 ANA Non-Optimized State : Supported 00:39:02.450 ANA Inaccessible State : Supported 00:39:02.450 ANA Persistent Loss State : Supported 00:39:02.450 ANA Change State : Supported 00:39:02.450 ANAGRPID is not changed : No 00:39:02.450 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:39:02.450 00:39:02.450 ANA Group Identifier Maximum : 128 00:39:02.450 Number of ANA Group Identifiers : 128 00:39:02.450 Max Number of Allowed Namespaces : 1024 00:39:02.450 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:39:02.450 Command Effects Log Page: Supported 00:39:02.450 Get Log Page Extended Data: Supported 00:39:02.450 Telemetry Log Pages: Not Supported 00:39:02.450 Persistent Event Log Pages: Not Supported 00:39:02.450 Supported Log Pages Log Page: May Support 00:39:02.450 Commands Supported & Effects Log Page: Not Supported 00:39:02.450 Feature Identifiers & Effects Log Page:May Support 00:39:02.450 NVMe-MI Commands & Effects Log Page: May Support 00:39:02.450 Data Area 4 for Telemetry Log: Not Supported 00:39:02.450 Error Log Page Entries Supported: 128 00:39:02.450 Keep Alive: Supported 00:39:02.450 Keep Alive Granularity: 1000 ms 00:39:02.450 00:39:02.450 NVM Command Set Attributes 00:39:02.450 ========================== 
00:39:02.450 Submission Queue Entry Size 00:39:02.450 Max: 64 00:39:02.450 Min: 64 00:39:02.450 Completion Queue Entry Size 00:39:02.450 Max: 16 00:39:02.450 Min: 16 00:39:02.450 Number of Namespaces: 1024 00:39:02.450 Compare Command: Not Supported 00:39:02.450 Write Uncorrectable Command: Not Supported 00:39:02.450 Dataset Management Command: Supported 00:39:02.450 Write Zeroes Command: Supported 00:39:02.450 Set Features Save Field: Not Supported 00:39:02.450 Reservations: Not Supported 00:39:02.450 Timestamp: Not Supported 00:39:02.450 Copy: Not Supported 00:39:02.450 Volatile Write Cache: Present 00:39:02.450 Atomic Write Unit (Normal): 1 00:39:02.450 Atomic Write Unit (PFail): 1 00:39:02.450 Atomic Compare & Write Unit: 1 00:39:02.450 Fused Compare & Write: Not Supported 00:39:02.450 Scatter-Gather List 00:39:02.450 SGL Command Set: Supported 00:39:02.450 SGL Keyed: Supported 00:39:02.450 SGL Bit Bucket Descriptor: Not Supported 00:39:02.450 SGL Metadata Pointer: Not Supported 00:39:02.450 Oversized SGL: Not Supported 00:39:02.450 SGL Metadata Address: Not Supported 00:39:02.450 SGL Offset: Supported 00:39:02.450 Transport SGL Data Block: Not Supported 00:39:02.450 Replay Protected Memory Block: Not Supported 00:39:02.450 00:39:02.450 Firmware Slot Information 00:39:02.450 ========================= 00:39:02.450 Active slot: 0 00:39:02.450 00:39:02.450 Asymmetric Namespace Access 00:39:02.450 =========================== 00:39:02.450 Change Count : 0 00:39:02.450 Number of ANA Group Descriptors : 1 00:39:02.450 ANA Group Descriptor : 0 00:39:02.450 ANA Group ID : 1 00:39:02.450 Number of NSID Values : 1 00:39:02.450 Change Count : 0 00:39:02.450 ANA State : 1 00:39:02.450 Namespace Identifier : 1 00:39:02.450 00:39:02.450 Commands Supported and Effects 00:39:02.450 ============================== 00:39:02.450 Admin Commands 00:39:02.450 -------------- 00:39:02.450 Get Log Page (02h): Supported 00:39:02.450 Identify (06h): Supported 00:39:02.450 Abort (08h): Supported 00:39:02.450 Set Features (09h): Supported 00:39:02.450 Get Features (0Ah): Supported 00:39:02.450 Asynchronous Event Request (0Ch): Supported 00:39:02.450 Keep Alive (18h): Supported 00:39:02.450 I/O Commands 00:39:02.450 ------------ 00:39:02.450 Flush (00h): Supported 00:39:02.450 Write (01h): Supported LBA-Change 00:39:02.450 Read (02h): Supported 00:39:02.450 Write Zeroes (08h): Supported LBA-Change 00:39:02.450 Dataset Management (09h): Supported 00:39:02.450 00:39:02.450 Error Log 00:39:02.450 ========= 00:39:02.450 Entry: 0 00:39:02.450 Error Count: 0x3 00:39:02.450 Submission Queue Id: 0x0 00:39:02.450 Command Id: 0x5 00:39:02.450 Phase Bit: 0 00:39:02.450 Status Code: 0x2 00:39:02.450 Status Code Type: 0x0 00:39:02.450 Do Not Retry: 1 00:39:02.450 Error Location: 0x28 00:39:02.450 LBA: 0x0 00:39:02.450 Namespace: 0x0 00:39:02.450 Vendor Log Page: 0x0 00:39:02.450 ----------- 00:39:02.450 Entry: 1 00:39:02.450 Error Count: 0x2 00:39:02.450 Submission Queue Id: 0x0 00:39:02.450 Command Id: 0x5 00:39:02.450 Phase Bit: 0 00:39:02.450 Status Code: 0x2 00:39:02.450 Status Code Type: 0x0 00:39:02.450 Do Not Retry: 1 00:39:02.450 Error Location: 0x28 00:39:02.450 LBA: 0x0 00:39:02.450 Namespace: 0x0 00:39:02.450 Vendor Log Page: 0x0 00:39:02.450 ----------- 00:39:02.450 Entry: 2 00:39:02.450 Error Count: 0x1 00:39:02.450 Submission Queue Id: 0x0 00:39:02.450 Command Id: 0x0 00:39:02.450 Phase Bit: 0 00:39:02.450 Status Code: 0x2 00:39:02.450 Status Code Type: 0x0 00:39:02.450 Do Not Retry: 1 00:39:02.450 Error Location: 
0x28 00:39:02.450 LBA: 0x0 00:39:02.450 Namespace: 0x0 00:39:02.450 Vendor Log Page: 0x0 00:39:02.450 00:39:02.450 Number of Queues 00:39:02.450 ================ 00:39:02.450 Number of I/O Submission Queues: 128 00:39:02.450 Number of I/O Completion Queues: 128 00:39:02.450 00:39:02.450 ZNS Specific Controller Data 00:39:02.450 ============================ 00:39:02.450 Zone Append Size Limit: 0 00:39:02.450 00:39:02.450 00:39:02.450 Active Namespaces 00:39:02.450 ================= 00:39:02.450 get_feature(0x05) failed 00:39:02.450 Namespace ID:1 00:39:02.450 Command Set Identifier: NVM (00h) 00:39:02.450 Deallocate: Supported 00:39:02.450 Deallocated/Unwritten Error: Not Supported 00:39:02.450 Deallocated Read Value: Unknown 00:39:02.450 Deallocate in Write Zeroes: Not Supported 00:39:02.450 Deallocated Guard Field: 0xFFFF 00:39:02.450 Flush: Supported 00:39:02.450 Reservation: Not Supported 00:39:02.450 Namespace Sharing Capabilities: Multiple Controllers 00:39:02.450 Size (in LBAs): 3750748848 (1788GiB) 00:39:02.450 Capacity (in LBAs): 3750748848 (1788GiB) 00:39:02.450 Utilization (in LBAs): 3750748848 (1788GiB) 00:39:02.450 UUID: 225159a0-c7bc-47a9-abb0-c96dcfa1ce36 00:39:02.450 Thin Provisioning: Not Supported 00:39:02.450 Per-NS Atomic Units: Yes 00:39:02.450 Atomic Write Unit (Normal): 8 00:39:02.450 Atomic Write Unit (PFail): 8 00:39:02.450 Preferred Write Granularity: 8 00:39:02.450 Atomic Compare & Write Unit: 8 00:39:02.450 Atomic Boundary Size (Normal): 0 00:39:02.450 Atomic Boundary Size (PFail): 0 00:39:02.450 Atomic Boundary Offset: 0 00:39:02.450 NGUID/EUI64 Never Reused: No 00:39:02.450 ANA group ID: 1 00:39:02.450 Namespace Write Protected: No 00:39:02.450 Number of LBA Formats: 1 00:39:02.450 Current LBA Format: LBA Format #00 00:39:02.450 LBA Format #00: Data Size: 512 Metadata Size: 0 00:39:02.450 00:39:02.450 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:39:02.450 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:02.450 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:39:02.451 rmmod nvme_rdma 00:39:02.451 rmmod nvme_fabrics 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:02.451 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:39:02.711 14:06:55 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:39:06.007 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:06.007 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:06.007 00:39:06.007 real 0m16.067s 00:39:06.007 user 0m4.966s 00:39:06.007 sys 0m10.141s 00:39:06.008 14:06:58 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:06.008 14:06:58 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:39:06.008 ************************************ 00:39:06.008 END TEST nvmf_identify_kernel_target 00:39:06.008 ************************************ 00:39:06.268 14:06:58 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:39:06.268 14:06:58 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:06.268 14:06:58 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:06.268 14:06:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:39:06.268 ************************************ 00:39:06.268 START TEST nvmf_auth_host 00:39:06.268 ************************************ 00:39:06.268 14:06:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:39:06.268 * Looking for test storage... 
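The clean_kernel_target step traced above tears down the configfs-backed kernel NVMe-oF target left over from the identify test. A minimal sketch of that sequence, using the subsystem, namespace and port paths visible in the trace; the redirect target of the echo 0 step is not shown by xtrace and is assumed here to be the namespace enable attribute:

nqn=nqn.2016-06.io.spdk:testnqn
cfs=/sys/kernel/config/nvmet
echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # assumed: disable the namespace before removal
rm -f "$cfs/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
rmdir "$cfs/subsystems/$nqn/namespaces/1"             # remove the namespace
rmdir "$cfs/ports/1"                                  # remove the port
rmdir "$cfs/subsystems/$nqn"                          # remove the subsystem itself
modprobe -r nvmet_rdma nvmet                          # unload the kernel target modules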
00:39:06.268 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:39:06.268 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:39:06.269 14:06:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:39:14.402 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:39:14.402 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:14.402 14:07:05 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:39:14.402 Found net devices under 0000:98:00.0: mlx_0_0 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:14.402 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:39:14.403 Found net devices under 0000:98:00.1: mlx_0_1 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:39:14.403 14:07:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:39:14.403 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:14.403 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:39:14.403 altname enp152s0f0np0 00:39:14.403 altname ens817f0np0 00:39:14.403 inet 192.168.100.8/24 scope global mlx_0_0 00:39:14.403 valid_lft forever preferred_lft forever 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:39:14.403 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:39:14.403 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:39:14.403 altname enp152s0f1np1 00:39:14.403 altname ens817f1np1 00:39:14.403 inet 192.168.100.9/24 scope global mlx_0_1 00:39:14.403 valid_lft forever preferred_lft forever 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:14.403 
14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:39:14.403 192.168.100.9' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:39:14.403 192.168.100.9' 00:39:14.403 14:07:06 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:39:14.403 192.168.100.9' 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:39:14.403 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2390643 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2390643 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2390643 ']' 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
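The nvmfappstart step traced above amounts to launching the SPDK target with the nvme_auth log component enabled and waiting for its RPC socket before any rpc_cmd is issued. A minimal sketch, assuming the app is backgrounded and its pid captured with $! (the trace only shows the resulting nvmfpid=2390643 and the waitforlisten call):

SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin

"$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &   # shm id 0, all trace flags, nvme_auth debug logging
nvmfpid=$!
waitforlisten "$nvmfpid"   # test helper: blocks until the app answers on /var/tmp/spdk.sock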
00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:14.404 14:07:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4bcdcfc257b20091cf72b390779eba5e 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QqW 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4bcdcfc257b20091cf72b390779eba5e 0 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4bcdcfc257b20091cf72b390779eba5e 0 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4bcdcfc257b20091cf72b390779eba5e 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QqW 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QqW 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QqW 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a7d972e5657ab16b83f0d56bdca4ec1a1fc18d1d6a9d31b7cec47854ed218520 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TS4 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a7d972e5657ab16b83f0d56bdca4ec1a1fc18d1d6a9d31b7cec47854ed218520 3 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a7d972e5657ab16b83f0d56bdca4ec1a1fc18d1d6a9d31b7cec47854ed218520 3 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a7d972e5657ab16b83f0d56bdca4ec1a1fc18d1d6a9d31b7cec47854ed218520 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TS4 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TS4 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.TS4 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a5393d70e338a4058bb162c98a1226564123594b1f068242 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SlO 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a5393d70e338a4058bb162c98a1226564123594b1f068242 0 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a5393d70e338a4058bb162c98a1226564123594b1f068242 0 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a5393d70e338a4058bb162c98a1226564123594b1f068242 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.SlO 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SlO 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SlO 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d919c61adcf7534c8bb18b196097e3043013d5c313195060 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QYY 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d919c61adcf7534c8bb18b196097e3043013d5c313195060 2 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d919c61adcf7534c8bb18b196097e3043013d5c313195060 2 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d919c61adcf7534c8bb18b196097e3043013d5c313195060 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QYY 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QYY 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.QYY 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c6c5db6c4a548aa2f46ab576cf4e3db 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.F3R 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c6c5db6c4a548aa2f46ab576cf4e3db 1 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 8c6c5db6c4a548aa2f46ab576cf4e3db 1 00:39:14.404 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.405 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.405 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c6c5db6c4a548aa2f46ab576cf4e3db 00:39:14.405 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:39:14.405 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.405 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.F3R 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.F3R 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.F3R 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=09918d4a31879a7b4731954148f0a046 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7F2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 09918d4a31879a7b4731954148f0a046 1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 09918d4a31879a7b4731954148f0a046 1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=09918d4a31879a7b4731954148f0a046 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7F2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7F2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7F2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:39:14.666 14:07:07 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bb69da9d308fa651d04dc238278ff78116ee66e466d6b866 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.15N 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bb69da9d308fa651d04dc238278ff78116ee66e466d6b866 2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bb69da9d308fa651d04dc238278ff78116ee66e466d6b866 2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bb69da9d308fa651d04dc238278ff78116ee66e466d6b866 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.15N 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.15N 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.15N 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe5b709332a858ef873c6278d477d8bf 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fQ1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe5b709332a858ef873c6278d477d8bf 0 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe5b709332a858ef873c6278d477d8bf 0 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fe5b709332a858ef873c6278d477d8bf 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fQ1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fQ1 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fQ1 
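Every gen_dhchap_key call traced above follows the same pattern: read len/2 random bytes as hex, drop them into a digest-named temp file, and restrict its permissions. A minimal sketch of that flow; the inline python that formats the raw hex into the DHHC-1 secret written to the file is not expanded by xtrace, so it is only noted in a comment (digest indices follow the null=0 / sha256=1 / sha384=2 / sha512=3 map from the trace):

gen_dhchap_key_sketch() {
    local digest=$1 len=$2
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
    file=$(mktemp -t "spdk.key-$digest.XXX")         # e.g. /tmp/spdk.key-null.QqW
    # format_dhchap_key / format_key would now write the DHHC-1-prefixed, digest-tagged
    # encoding of $key into $file (exact encoding done by the python one-liner, assumed here)
    chmod 0600 "$file"                               # DH-HMAC-CHAP secrets must not be world-readable
    echo "$file"
}

gen_dhchap_key_sketch null 32     # style of keys[0]
gen_dhchap_key_sketch sha512 64   # style of ckeys[0]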
00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e2ecec2f863c6cad076c73d5b3d96365bf226f8b5d1e4509fe0be57829d01f3e 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.emd 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e2ecec2f863c6cad076c73d5b3d96365bf226f8b5d1e4509fe0be57829d01f3e 3 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e2ecec2f863c6cad076c73d5b3d96365bf226f8b5d1e4509fe0be57829d01f3e 3 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:39:14.666 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e2ecec2f863c6cad076c73d5b3d96365bf226f8b5d1e4509fe0be57829d01f3e 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.emd 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.emd 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.emd 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2390643 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2390643 ']' 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:14.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
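Once the key files exist and the target process is listening on /var/tmp/spdk.sock, the test registers each file with the target's keyring; those are the keyring_file_add_key RPCs traced next. A hedged sketch of the equivalent standalone calls, assuming the stock scripts/rpc.py client from an SPDK checkout and the default RPC socket; key names and file paths are taken from the trace.

    # Sketch only: mirror the rpc_cmd keyring_file_add_key calls below.
    import subprocess

    RPC = "scripts/rpc.py"           # path inside an SPDK checkout (assumption)
    SOCK = "/var/tmp/spdk.sock"      # default RPC socket, as waited on above

    def add_key(name: str, path: str) -> None:
        """Register a DH-CHAP key file under the given keyring name."""
        subprocess.run([RPC, "-s", SOCK, "keyring_file_add_key", name, path], check=True)

    # e.g. add_key("key0", "/tmp/spdk.key-null.QqW")
    #      add_key("ckey0", "/tmp/spdk.key-sha512.TS4")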
00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:14.667 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QqW 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.TS4 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TS4 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SlO 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.QYY ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QYY 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.F3R 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7F2 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7F2 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.15N 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fQ1 ]] 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fQ1 00:39:14.928 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.emd 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:39:14.929 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:15.189 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:15.189 14:07:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:39:18.489 Waiting for block devices as requested 00:39:18.489 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:18.489 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:18.489 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:18.489 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:18.489 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:18.489 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:18.750 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:18.750 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:18.750 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:19.011 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:19.011 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:19.271 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:19.271 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:19.271 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:19.271 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:19.533 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:19.533 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:20.106 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:20.107 No valid GPT data, bailing 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:20.107 14:07:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:20.107 
14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:39:20.107 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:20.370 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:39:20.370 00:39:20.370 Discovery Log Number of Records 2, Generation counter 2 00:39:20.370 =====Discovery Log Entry 0====== 00:39:20.370 trtype: rdma 00:39:20.371 adrfam: ipv4 00:39:20.371 subtype: current discovery subsystem 00:39:20.371 treq: not specified, sq flow control disable supported 00:39:20.371 portid: 1 00:39:20.371 trsvcid: 4420 00:39:20.371 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:20.371 traddr: 192.168.100.8 00:39:20.371 eflags: none 00:39:20.371 rdma_prtype: not specified 00:39:20.371 rdma_qptype: connected 00:39:20.371 rdma_cms: rdma-cm 00:39:20.371 rdma_pkey: 0x0000 00:39:20.371 =====Discovery Log Entry 1====== 00:39:20.371 trtype: rdma 00:39:20.371 adrfam: ipv4 00:39:20.371 subtype: nvme subsystem 00:39:20.371 treq: not specified, sq flow control disable supported 00:39:20.371 portid: 1 00:39:20.371 trsvcid: 4420 00:39:20.371 subnqn: nqn.2024-02.io.spdk:cnode0 00:39:20.371 traddr: 192.168.100.8 00:39:20.371 eflags: none 00:39:20.371 rdma_prtype: not specified 00:39:20.371 rdma_qptype: connected 00:39:20.371 rdma_cms: rdma-cm 00:39:20.371 rdma_pkey: 0x0000 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.371 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.632 nvme0n1 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.632 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.893 nvme0n1 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:20.894 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:21.154 14:07:13 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.154 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.155 14:07:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.155 nvme0n1 00:39:21.155 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.416 14:07:14 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.416 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.677 nvme0n1 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:21.677 14:07:14 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.677 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.938 nvme0n1 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:21.938 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.939 14:07:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.200 nvme0n1 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.200 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.461 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 nvme0n1 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
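Each connect_authenticate iteration in this loop reduces to the two RPCs visible in the trace: bdev_nvme_set_options restricts the digests and DH groups the host will negotiate, then bdev_nvme_attach_controller connects to the kernel target at 192.168.100.8:4420 with the selected --dhchap-key/--dhchap-ctrlr-key pair. A minimal sketch of one iteration under the same assumptions as the rpc.py sketch above; all flag names and values are copied from the trace.

    # Sketch of one connect/authenticate round, mirroring host/auth.sh's connect_authenticate.
    import subprocess

    def rpc(*args: str) -> None:
        subprocess.run(["scripts/rpc.py", "-s", "/var/tmp/spdk.sock", *args], check=True)

    def connect_authenticate(digest: str, dhgroup: str, keyid: int) -> None:
        # Limit negotiation to one digest/DH group, then attach with key<N>/ckey<N>.
        rpc("bdev_nvme_set_options",
            "--dhchap-digests", digest, "--dhchap-dhgroups", dhgroup)
        rpc("bdev_nvme_attach_controller", "-b", "nvme0", "-t", "rdma", "-f", "ipv4",
            "-a", "192.168.100.8", "-s", "4420",
            "-q", "nqn.2024-02.io.spdk:host0", "-n", "nqn.2024-02.io.spdk:cnode0",
            "--dhchap-key", f"key{keyid}", "--dhchap-ctrlr-key", f"ckey{keyid}")

    connect_authenticate("sha256", "ffdhe3072", 2)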
00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.721 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.982 nvme0n1 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:22.982 14:07:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.242 nvme0n1 00:39:23.242 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.242 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.242 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.242 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:23.502 
14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.502 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.762 nvme0n1 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.762 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:23.763 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.023 nvme0n1 00:39:24.023 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.023 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:24.023 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:24.023 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.023 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.023 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:39:24.284 14:07:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.545 nvme0n1 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.545 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:24.806 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.067 nvme0n1 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.067 14:07:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.639 nvme0n1 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.639 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.900 nvme0n1 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:25.900 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:26.161 14:07:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.443 nvme0n1 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.443 
14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:26.443 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.051 nvme0n1 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.051 14:07:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.622 nvme0n1 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 
00:39:27.622 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:27.882 
14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:27.882 14:07:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 nvme0n1 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:28.454 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.026 nvme0n1 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:39:29.026 14:07:21 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:29.026 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.027 14:07:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:39:29.598 nvme0n1 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:29.598 14:07:22 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.598 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:29.599 14:07:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.540 nvme0n1 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:30.540 14:07:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.481 nvme0n1 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.481 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.482 14:07:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.424 nvme0n1 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:32.424 14:07:25 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:32.424 14:07:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.366 nvme0n1 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.366 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.309 nvme0n1 00:39:34.309 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.309 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.309 14:07:26 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.309 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.309 14:07:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:34.309 
14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:34.309 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:34.310 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:34.310 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:34.310 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:34.310 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:34.310 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.310 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.579 nvme0n1 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.579 
14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:34.579 
14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.579 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.844 nvme0n1 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:34.844 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:34.845 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.105 nvme0n1 00:39:35.105 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.105 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:35.105 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.105 14:07:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:35.105 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.105 14:07:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.366 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.367 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.627 nvme0n1 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.627 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:35.628 14:07:28 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.628 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.888 nvme0n1 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:35.888 14:07:28 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:35.888 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.149 nvme0n1 
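The trace above repeats one pattern per digest / DH-group / key-ID combination: host/auth.sh programs the target-side secret (nvmet_auth_set_key), restricts the initiator to a single digest and DH group (bdev_nvme_set_options), attaches nvme0 over RDMA to 192.168.100.8:4420 with the matching --dhchap-key / --dhchap-ctrlr-key pair, confirms the controller shows up in bdev_nvme_get_controllers, and detaches it again. A minimal stand-alone sketch of one such iteration follows; it assumes rpc_cmd resolves to SPDK's scripts/rpc.py (as in the test environment) and that the key1/ckey1 keyring entries were registered earlier in the run, outside this excerpt. All RPC names, flags, NQNs and the address are the ones visible in the log.

    # Hedged sketch of one connect_authenticate pass (sha256 / ffdhe8192 / key ID 1).
    # Assumes ./scripts/rpc.py is SPDK's RPC client and key1/ckey1 already exist in the keyring.
    rpc=./scripts/rpc.py

    # Limit the host to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach with in-band authentication, using the host and (optional) controller keys.
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The controller must appear under the expected name, then gets detached again.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

A failed handshake would surface as bdev_nvme_attach_controller returning an error and the controller name check above failing, which is exactly what the [[ nvme0 == \n\v\m\e\0 ]] assertions in the trace guard against.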
00:39:36.149 14:07:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.149 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.410 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.670 nvme0n1 00:39:36.670 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.670 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.670 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:36.670 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.670 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.670 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.671 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.932 nvme0n1 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:36.932 14:07:29 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:36.932 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:36.933 
14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:36.933 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:37.193 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:37.193 14:07:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:37.193 14:07:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:37.193 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.193 14:07:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.454 nvme0n1 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:37.454 14:07:30 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:37.454 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.455 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.716 nvme0n1 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:37.716 14:07:30 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:37.716 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.976 nvme0n1 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.237 14:07:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.498 nvme0n1 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.498 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.760 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.022 nvme0n1 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:39.022 
14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:39.022 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.023 14:07:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.594 nvme0n1 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:39.594 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.595 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.856 nvme0n1 00:39:39.856 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
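
Keyid 4 above is the unidirectional case: its ckey is empty, so the [[ -z '' ]] branch skips the controller-key handling and the attach is issued with --dhchap-key key4 only. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line visible in the trace is the bash idiom that makes this automatic: the array stays empty whenever no controller key is defined for the keyid. A small, self-contained illustration of that expansion, with placeholder values rather than the real secrets from this run:

#!/usr/bin/env bash
# Illustration of the ':+' expansion used by the traced script. Values are
# placeholders; only keyid 4 is deliberately left without a controller key.
ckeys=([0]="placeholder-secret" [4]="")

for keyid in 0 4; do
    extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#extra[@]} extra arg(s): ${extra[*]}"
done
# keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
# keyid=4 -> 0 extra arg(s):
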
00:39:39.856 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.856 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:39.856 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:39.856 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.856 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.118 14:07:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.690 nvme0n1 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:40.690 14:07:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.261 nvme0n1 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:41.261 14:07:34 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.261 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.833 nvme0n1 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:41.833 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:41.834 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:41.834 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:41.834 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:42.093 14:07:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:42.094 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.094 14:07:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.663 nvme0n1 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.663 14:07:35 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:42.663 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.234 nvme0n1 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:43.234 14:07:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:43.234 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:43.234 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:43.234 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.178 nvme0n1 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:44.178 14:07:36 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:44.178 14:07:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.120 nvme0n1 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:45.120 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:45.121 
14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:45.121 14:07:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.063 nvme0n1 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:46.063 
14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.063 14:07:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.634 nvme0n1 00:39:46.634 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:46.895 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:46.896 14:07:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.837 nvme0n1 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:47.837 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 
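Each digest/DH-group/keyid combination exercised above repeats the same host-side sequence: restrict the SPDK bdev_nvme layer to one DH-HMAC-CHAP digest and DH group, attach a controller over RDMA with the matching key pair, check that the controller shows up as nvme0, and detach it again before the next combination. A minimal manual sketch of one such pass (sha384/ffdhe8192, keyid=1), assuming rpc_cmd resolves to scripts/rpc.py as in the SPDK autotest helpers and that the keyring names key1/ckey1 were registered earlier in the run:

  # sketch only -- mirrors one connect_authenticate iteration from the trace above
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1     # keyring entries, assumed pre-registered
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0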
00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:47.838 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.098 nvme0n1 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.098 14:07:40 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:48.098 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
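Every secret passed around in this trace uses the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> is the key-transformation hint (00 = none, 01/02/03 = SHA-256/384/512 in the encoding nvme-cli's gen-dhchap-key emits) and the base64 payload carries the raw secret followed by a 4-byte CRC-32. The trace only passes these strings through, so the field meanings are stated here as an assumption rather than something the log proves; a quick decode sketch:

  # sketch: inspect the keyid=0 secret quoted in the trace above
  secret='DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ:'
  b64=${secret#DHHC-1:*:}                 # drop the 'DHHC-1:<t>:' prefix
  b64=${b64%:}                            # drop the trailing ':'
  printf '%s' "$b64" | base64 -d | wc -c  # prints 36: a 32-byte secret plus the CRC-32 trailer

The longer secrets in the trace decode the same way, just with 48- or 64-byte key material, which lets the test cover the different hash output sizes with one key set.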
00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.099 14:07:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.358 nvme0n1 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:48.358 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
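The matching target-side step, nvmet_auth_set_key, only exposes its echo arguments in the xtrace output; where they are written is not shown here. Assuming the target in this run is the Linux kernel nvmet driver, those echoes would land in the per-host in-band-authentication attributes under configfs, roughly as below (the paths and attribute names are an assumption based on the kernel nvmet interface, not something this log confirms):

  # hypothetical target-side programming for the sha512/ffdhe2048, keyid=2 step traced above
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"     # digest, as echoed at auth.sh@48
  echo ffdhe2048      > "$host/dhchap_dhgroup"  # DH group, as echoed at auth.sh@49
  echo 'DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn:' > "$host/dhchap_key"       # host secret (auth.sh@50)
  echo 'DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ:' > "$host/dhchap_ctrl_key"  # controller secret (auth.sh@51)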
00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.359 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.620 nvme0n1 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.620 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.880 nvme0n1 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.881 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.142 14:07:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.142 nvme0n1 00:39:49.142 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.404 14:07:42 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.404 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.665 nvme0n1 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:49.665 14:07:42 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:49.665 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:49.666 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:49.666 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.666 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.925 nvme0n1 00:39:49.925 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:49.925 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:49.925 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:49.925 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:49.925 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.925 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.186 14:07:42 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.186 14:07:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.447 nvme0n1 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.447 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.708 nvme0n1 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.708 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.969 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.237 nvme0n1 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.237 14:07:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.567 nvme0n1 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:51.567 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:51.829 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.091 nvme0n1 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.091 
14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.091 14:07:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.663 nvme0n1 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 
00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.663 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.925 nvme0n1 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:52.925 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:53.186 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:53.186 14:07:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:53.186 14:07:45 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:53.186 14:07:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:53.186 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:53.186 14:07:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.447 nvme0n1 00:39:53.447 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:53.448 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.021 nvme0n1 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:54.021 14:07:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:54.282 14:07:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:54.283 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.283 14:07:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.854 nvme0n1 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:54.854 14:07:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.425 nvme0n1 00:39:55.425 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.425 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.426 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.999 nvme0n1 00:39:55.999 14:07:48 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:55.999 14:07:48 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:55.999 14:07:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.571 nvme0n1 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:56.571 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGJjZGNmYzI1N2IyMDA5MWNmNzJiMzkwNzc5ZWJhNWXm1tdZ: 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: ]] 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTdkOTcyZTU2NTdhYjE2YjgzZjBkNTZiZGNhNGVjMWExZmMxOGQxZDZhOWQzMWI3Y2VjNDc4NTRlZDIxODUyMFZM6EQ=: 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.572 14:07:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.516 nvme0n1 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:57.516 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:57.517 14:07:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.461 nvme0n1 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGM2YzVkYjZjNGE1NDhhYTJmNDZhYjU3NmNmNGUzZGIiUlwn: 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDk5MThkNGEzMTg3OWE3YjQ3MzE5NTQxNDhmMGEwNDamIJrQ: 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:58.461 14:07:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.407 nvme0n1 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI2OWRhOWQzMDhmYTY1MWQwNGRjMjM4Mjc4ZmY3ODExNmVlNjZlNDY2ZDZiODY2/ZeZWw==: 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmU1YjcwOTMzMmE4NThlZjg3M2M2Mjc4ZDQ3N2Q4Ymalx3Ry: 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:39:59.407 14:07:52 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:59.407 14:07:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.350 nvme0n1 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.350 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJlY2VjMmY4NjNjNmNhZDA3NmM3M2Q1YjNkOTYzNjViZjIyNmY4YjVkMWU0NTA5ZmUwYmU1NzgyOWQwMWYzZSTmWko=: 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:00.351 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.295 nvme0n1 00:40:01.295 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.295 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:40:01.295 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.295 14:07:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:40:01.295 14:07:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzOTNkNzBlMzM4YTQwNThiYjE2MmM5OGExMjI2NTY0MTIzNTk0YjFmMDY4MjQyadebUQ==: 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDkxOWM2MWFkY2Y3NTM0YzhiYjE4YjE5NjA5N2UzMDQzMDEzZDVjMzEzMTk1MDYwU+S1VQ==: 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 
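The iterations traced above all drive the same connect_authenticate sequence through rpc_cmd, varying only the digest, DH group and keyid. A minimal condensed sketch of one such iteration, using the helper names and key handling shown in this trace (nvmet_auth_set_key and rpc_cmd are the helpers from host/auth.sh and autotest_common.sh; the address, NQNs and bdev name are the ones logged above):

# one positive DH-HMAC-CHAP iteration, condensed from the trace above
digest=sha512; dhgroup=ffdhe6144; keyid=1

# 1) install the key (and controller key) for this keyid on the kernel nvmet target
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# 2) restrict the initiator to the same digest and DH group before connecting
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 3) attach with the matching host and controller keys, verify, then detach
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0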
00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:01.295 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.296 request: 00:40:01.296 { 00:40:01.296 "name": "nvme0", 00:40:01.296 "trtype": "rdma", 00:40:01.296 "traddr": "192.168.100.8", 00:40:01.296 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:40:01.296 "adrfam": "ipv4", 00:40:01.296 "trsvcid": "4420", 00:40:01.296 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:40:01.296 "method": "bdev_nvme_attach_controller", 00:40:01.296 "req_id": 1 00:40:01.296 } 00:40:01.296 Got JSON-RPC error response 00:40:01.296 response: 00:40:01.296 { 00:40:01.296 "code": -5, 00:40:01.296 "message": "Input/output error" 00:40:01.296 } 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.296 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 
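The failed attach above is what every bad-key case in this trace is expected to produce: bdev_nvme_attach_controller returns the JSON-RPC error {"code": -5, "message": "Input/output error"} and exits non-zero, which the NOT wrapper from autotest_common.sh turns into a passing check, and jq length then confirms that no controller was left attached. A spelled-out equivalent of that check (a sketch only; key2 is the mismatched key used in the trace, and the explicit if/exit form stands in for the NOT helper):

# expect DH-HMAC-CHAP authentication to fail when the host presents a mismatched key
if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
  echo "unexpected: attach succeeded with a mismatched DH-HMAC-CHAP key" >&2
  exit 1
fi
# nothing should remain attached after the rejected connect
(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))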
00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.557 request: 00:40:01.557 { 00:40:01.557 "name": "nvme0", 00:40:01.557 "trtype": "rdma", 00:40:01.557 "traddr": "192.168.100.8", 00:40:01.557 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:40:01.557 "adrfam": "ipv4", 00:40:01.557 "trsvcid": "4420", 00:40:01.557 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:40:01.557 "dhchap_key": "key2", 00:40:01.557 "method": "bdev_nvme_attach_controller", 00:40:01.557 "req_id": 1 00:40:01.557 } 00:40:01.557 Got JSON-RPC error response 00:40:01.557 response: 00:40:01.557 { 00:40:01.557 "code": -5, 00:40:01.557 "message": "Input/output error" 00:40:01.557 } 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:40:01.557 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:01.558 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:01.819 request: 00:40:01.819 { 00:40:01.819 "name": "nvme0", 00:40:01.819 "trtype": "rdma", 00:40:01.819 "traddr": "192.168.100.8", 00:40:01.819 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:40:01.819 "adrfam": "ipv4", 00:40:01.819 "trsvcid": "4420", 00:40:01.819 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:40:01.819 "dhchap_key": "key1", 00:40:01.819 "dhchap_ctrlr_key": "ckey2", 00:40:01.819 "method": "bdev_nvme_attach_controller", 00:40:01.819 "req_id": 1 00:40:01.819 } 00:40:01.819 Got JSON-RPC error response 00:40:01.819 response: 00:40:01.819 { 00:40:01.819 "code": -5, 00:40:01.819 "message": "Input/output error" 00:40:01.819 } 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:40:01.819 rmmod nvme_rdma 00:40:01.819 rmmod nvme_fabrics 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2390643 ']' 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2390643 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 2390643 ']' 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 2390643 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:01.819 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2390643 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2390643' 00:40:01.820 killing process with pid 2390643 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 2390643 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 2390643 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:40:01.820 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:02.081 14:07:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:40:02.081 14:07:54 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:40:05.385 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:05.385 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:05.647 14:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QqW /tmp/spdk.key-null.SlO /tmp/spdk.key-sha256.F3R /tmp/spdk.key-sha384.15N /tmp/spdk.key-sha512.emd /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:40:05.647 14:07:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:40:08.947 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:08.947 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:08.947 00:40:08.947 real 1m2.618s 00:40:08.947 user 0m58.047s 00:40:08.947 sys 0m14.461s 00:40:08.947 14:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:08.947 14:08:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:40:08.947 ************************************ 00:40:08.947 END TEST nvmf_auth_host 00:40:08.947 ************************************ 00:40:08.947 14:08:01 nvmf_rdma -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:40:08.947 14:08:01 nvmf_rdma -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:40:08.947 14:08:01 nvmf_rdma -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:40:08.947 14:08:01 nvmf_rdma -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:40:08.947 14:08:01 nvmf_rdma -- nvmf/nvmf.sh@121 -- # run_test 
nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:40:08.947 14:08:01 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:08.947 14:08:01 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:08.947 14:08:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:40:08.947 ************************************ 00:40:08.947 START TEST nvmf_bdevperf 00:40:08.947 ************************************ 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:40:08.947 * Looking for test storage... 00:40:08.947 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:40:08.947 14:08:01 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:40:08.948 14:08:01 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:40:08.948 14:08:01 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:15.528 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:15.529 
14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:40:15.529 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:40:15.529 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:40:15.529 Found net devices under 0000:98:00.0: mlx_0_0 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:40:15.529 Found net devices under 0000:98:00.1: mlx_0_1 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:40:15.529 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@105 -- # continue 2 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:40:15.790 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:15.790 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:40:15.790 altname enp152s0f0np0 00:40:15.790 altname ens817f0np0 00:40:15.790 inet 192.168.100.8/24 scope global mlx_0_0 00:40:15.790 valid_lft forever preferred_lft forever 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:40:15.790 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:15.790 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:40:15.790 altname enp152s0f1np1 00:40:15.790 altname ens817f1np1 00:40:15.790 inet 192.168.100.9/24 scope global mlx_0_1 00:40:15.790 valid_lft forever preferred_lft forever 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 
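The address discovery traced above reduces to one pipeline per RDMA netdev; a minimal sketch of the equivalent manual check (the interface names mlx_0_0/mlx_0_1 are simply the ones this node reported, not fixed values):

# print the IPv4 address assigned to each mlx netdev, as the traced get_ip_address pipeline does
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# on this node that yields 192.168.100.8 and 192.168.100.9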
00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:40:15.790 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:40:15.791 192.168.100.9' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:40:15.791 192.168.100.9' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:40:15.791 192.168.100.9' 00:40:15.791 14:08:08 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2408039 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2408039 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2408039 ']' 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:15.791 14:08:08 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:15.791 [2024-06-11 14:08:08.697383] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:15.791 [2024-06-11 14:08:08.697435] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:16.051 EAL: No free 2048 kB hugepages reported on node 1 00:40:16.051 [2024-06-11 14:08:08.775622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:16.051 [2024-06-11 14:08:08.841693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:16.051 [2024-06-11 14:08:08.841729] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:16.051 [2024-06-11 14:08:08.841737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:16.051 [2024-06-11 14:08:08.841744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
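The nvmfappstart step above amounts to launching the target and blocking until its RPC socket answers. A rough stand-alone equivalent, run from the SPDK tree, is sketched below; the rpc.py readiness poll is an assumption standing in for the harness's waitforlisten helper, not its exact implementation:

# start the NVMe-oF target on cores 1-3 (-m 0xE) with tracepoints enabled (-e 0xFFFF), as traced above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# wait until the target answers on its default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done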
00:40:16.051 [2024-06-11 14:08:08.841750] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:16.051 [2024-06-11 14:08:08.841853] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:16.051 [2024-06-11 14:08:08.842010] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.051 [2024-06-11 14:08:08.842012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.622 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:16.882 [2024-06-11 14:08:09.534869] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f2d5d0/0x1f31ac0) succeed. 00:40:16.882 [2024-06-11 14:08:09.548438] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f2eb70/0x1f73150) succeed. 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:16.882 Malloc0 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:16.882 [2024-06-11 14:08:09.714981] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target 
Listening on 192.168.100.8 port 4420 *** 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:16.882 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:16.882 { 00:40:16.882 "params": { 00:40:16.882 "name": "Nvme$subsystem", 00:40:16.882 "trtype": "$TEST_TRANSPORT", 00:40:16.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.882 "adrfam": "ipv4", 00:40:16.882 "trsvcid": "$NVMF_PORT", 00:40:16.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.883 "hdgst": ${hdgst:-false}, 00:40:16.883 "ddgst": ${ddgst:-false} 00:40:16.883 }, 00:40:16.883 "method": "bdev_nvme_attach_controller" 00:40:16.883 } 00:40:16.883 EOF 00:40:16.883 )") 00:40:16.883 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:40:16.883 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:40:16.883 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:40:16.883 14:08:09 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:16.883 "params": { 00:40:16.883 "name": "Nvme1", 00:40:16.883 "trtype": "rdma", 00:40:16.883 "traddr": "192.168.100.8", 00:40:16.883 "adrfam": "ipv4", 00:40:16.883 "trsvcid": "4420", 00:40:16.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:16.883 "hdgst": false, 00:40:16.883 "ddgst": false 00:40:16.883 }, 00:40:16.883 "method": "bdev_nvme_attach_controller" 00:40:16.883 }' 00:40:16.883 [2024-06-11 14:08:09.765478] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:16.883 [2024-06-11 14:08:09.765531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408240 ] 00:40:16.883 EAL: No free 2048 kB hugepages reported on node 1 00:40:17.143 [2024-06-11 14:08:09.825152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.143 [2024-06-11 14:08:09.890183] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.403 Running I/O for 1 seconds... 
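The JSON printed above is what bdevperf receives on /dev/fd/62. Written to a file it becomes an ordinary SPDK JSON config, so the same 1-second verify run can be reproduced outside the harness roughly as below; the subsystems/bdev wrapper is the standard config layout and is assumed (not shown verbatim in the log) to match what gen_nvmf_target_json emits:

# hypothetical standalone config file mirroring the printed bdev_nvme_attach_controller parameters
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same queue depth, I/O size, workload and runtime as the traced invocation
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 1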
00:40:18.343 00:40:18.343 Latency(us) 00:40:18.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.343 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:18.343 Verification LBA range: start 0x0 length 0x4000 00:40:18.343 Nvme1n1 : 1.01 14335.70 56.00 0.00 0.00 8868.36 2703.36 22609.92 00:40:18.343 =================================================================================================================== 00:40:18.343 Total : 14335.70 56.00 0.00 0.00 8868.36 2703.36 22609.92 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2408457 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:18.343 { 00:40:18.343 "params": { 00:40:18.343 "name": "Nvme$subsystem", 00:40:18.343 "trtype": "$TEST_TRANSPORT", 00:40:18.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:18.343 "adrfam": "ipv4", 00:40:18.343 "trsvcid": "$NVMF_PORT", 00:40:18.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:18.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:18.343 "hdgst": ${hdgst:-false}, 00:40:18.343 "ddgst": ${ddgst:-false} 00:40:18.343 }, 00:40:18.343 "method": "bdev_nvme_attach_controller" 00:40:18.343 } 00:40:18.343 EOF 00:40:18.343 )") 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:40:18.343 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:40:18.604 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:40:18.604 14:08:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:18.604 "params": { 00:40:18.604 "name": "Nvme1", 00:40:18.604 "trtype": "rdma", 00:40:18.604 "traddr": "192.168.100.8", 00:40:18.604 "adrfam": "ipv4", 00:40:18.604 "trsvcid": "4420", 00:40:18.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:18.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:18.604 "hdgst": false, 00:40:18.604 "ddgst": false 00:40:18.604 }, 00:40:18.604 "method": "bdev_nvme_attach_controller" 00:40:18.604 }' 00:40:18.604 [2024-06-11 14:08:11.288625] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:18.604 [2024-06-11 14:08:11.288682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2408457 ] 00:40:18.604 EAL: No free 2048 kB hugepages reported on node 1 00:40:18.604 [2024-06-11 14:08:11.347971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.604 [2024-06-11 14:08:11.412035] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.865 Running I/O for 15 seconds... 
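The abort storm that follows is intentional: bdevperf.sh starts the 15-second verify job and then kill -9s the target out from under it, so every in-flight WRITE completes as ABORTED - SQ DELETION. A simplified sketch of that pattern is below; the pid variables are placeholders for the harness's own bookkeeping, and only the steps actually traced here (background bdevperf, kill -9, sleep 3) are shown:

# long verify job against the same JSON config; -f is the extra flag the harness passes for this failure stage
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
# yank the target away mid-run, then give the host a moment to observe the aborts
kill -9 "$nvmfpid"
sleep 3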
00:40:21.408 14:08:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2408039 00:40:21.408 14:08:14 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:40:22.796 [2024-06-11 14:08:15.274070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96856 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.796 [2024-06-11 14:08:15.274572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.796 [2024-06-11 14:08:15.274579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274604] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 
dnr:0 00:40:22.797 [2024-06-11 14:08:15.274768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.274986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.274996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:22.797 [2024-06-11 14:08:15.275137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182800 00:40:22.797 [2024-06-11 14:08:15.275155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182800 00:40:22.797 [2024-06-11 14:08:15.275174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182800 00:40:22.797 [2024-06-11 14:08:15.275190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182800 00:40:22.797 [2024-06-11 14:08:15.275206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.797 [2024-06-11 14:08:15.275216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 
00:40:22.798 [2024-06-11 14:08:15.275249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:96520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182800 00:40:22.798 [2024-06-11 14:08:15.275820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.798 [2024-06-11 14:08:15.275829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96592 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007554000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.275986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.275995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 
key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 
14:08:15.276175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.276235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182800 00:40:22.799 [2024-06-11 14:08:15.276242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32767 cdw0:3eff200 sqhd:bca0 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.278464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:40:22.799 [2024-06-11 14:08:15.278477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:22.799 [2024-06-11 14:08:15.278485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96776 len:8 PRP1 0x0 PRP2 0x0 00:40:22.799 [2024-06-11 14:08:15.278493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:22.799 [2024-06-11 14:08:15.278524] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
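[editor's note] The long run of paired notices above is the bdev_nvme layer draining a queue pair during controller reset: each queued WRITE/READ is echoed by nvme_io_qpair_print_command and then completed with ABORTED - SQ DELETION before the qpair (0x2000192e4980) is disconnected and freed. When triaging a log like this, a per-opcode count of the drained commands is usually enough; a minimal sketch, assuming the console output has been saved to a file (build.log is a placeholder path):

    # Summarize the abort flood in a saved copy of this console log.
    # build.log is a placeholder; point it at the archived job output.
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l          # total aborted completions
    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' build.log \
      | awk '{print $2}' | sort | uniq -c                      # drained commands split by opcode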
00:40:22.799 [2024-06-11 14:08:15.282118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:22.799 [2024-06-11 14:08:15.301900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:22.799 [2024-06-11 14:08:15.306267] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:22.799 [2024-06-11 14:08:15.306285] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:22.799 [2024-06-11 14:08:15.306292] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:40:23.742 [2024-06-11 14:08:16.310680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:23.742 [2024-06-11 14:08:16.310701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:23.742 [2024-06-11 14:08:16.310917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:23.742 [2024-06-11 14:08:16.310926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:23.742 [2024-06-11 14:08:16.310934] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:40:23.742 [2024-06-11 14:08:16.310946] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:23.742 [2024-06-11 14:08:16.314438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
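[editor's note] The reset attempts then fail at the RDMA layer: the host expects RDMA_CM_EVENT_ESTABLISHED but receives RDMA_CM_EVENT_REJECTED (status 8) from the CM event channel, which surfaces as connect error -74 and a failed rqpair, so the controller keeps cycling through reset and reconnect. Here that is the test doing its job; when the same pattern appears unexpectedly, it is worth confirming plain RDMA-CM reachability outside SPDK. A sketch using rdma-core utilities, whose presence on the node is an assumption:

    # Out-of-band RDMA sanity check (ibv_devinfo and rping come from the
    # rdma-core packages; run the server line on the target node and the
    # client line on the initiator node).
    ibv_devinfo | grep -E 'hca_id|state'       # devices present and PORT_ACTIVE?
    rping -s -a 192.168.100.8 -C 1 -v          # target side: accept one RDMA-CM ping
    rping -c -a 192.168.100.8 -C 1 -v          # initiator side: send one RDMA-CM ping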
00:40:23.742 [2024-06-11 14:08:16.324731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:23.742 [2024-06-11 14:08:16.328514] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:23.742 [2024-06-11 14:08:16.328531] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:23.742 [2024-06-11 14:08:16.328537] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:40:24.735 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2408039 Killed "${NVMF_APP[@]}" "$@" 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2409746 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2409746 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2409746 ']' 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:24.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:24.735 14:08:17 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:24.735 [2024-06-11 14:08:17.306033] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:24.735 [2024-06-11 14:08:17.306086] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:24.735 [2024-06-11 14:08:17.332936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:24.735 [2024-06-11 14:08:17.332955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
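[editor's note] At this point bdevperf.sh (line 35 of the script) has killed the previous nvmf_tgt instance (pid 2408039), and tgt_init/nvmfappstart relaunch the target with reactor mask 0xE, record the new pid 2409746, and wait for its RPC socket before reconfiguring it, while the host-side controller keeps printing the failed-state and reset messages interleaved above. Stripped of the helper functions, the restart amounts to the following sketch (arguments taken from the trace; the polling loop is a simplification of waitforlisten, not its real implementation):

    # Relaunch the NVMe-oF target roughly as nvmfappstart does above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # shm id 0, all tracepoint groups, cores 1-3
    nvmfpid=$!
    # waitforlisten: poll until the RPC socket answers (simplified here).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done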
00:40:24.735 [2024-06-11 14:08:17.333177] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:24.735 [2024-06-11 14:08:17.333188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:24.735 [2024-06-11 14:08:17.333196] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:40:24.735 [2024-06-11 14:08:17.333628] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:24.735 EAL: No free 2048 kB hugepages reported on node 1 00:40:24.735 [2024-06-11 14:08:17.336714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:24.735 [2024-06-11 14:08:17.347437] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:24.735 [2024-06-11 14:08:17.351138] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:24.735 [2024-06-11 14:08:17.351155] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:24.735 [2024-06-11 14:08:17.351162] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:40:24.735 [2024-06-11 14:08:17.384792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:24.735 [2024-06-11 14:08:17.438502] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:24.735 [2024-06-11 14:08:17.438536] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:24.735 [2024-06-11 14:08:17.438541] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:24.735 [2024-06-11 14:08:17.438545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:24.735 [2024-06-11 14:08:17.438549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
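[editor's note] The fresh target reports three usable cores and starts one reactor per bit of the 0xE mask (cores 1, 2 and 3); because it was launched with -e 0xFFFF, all tracepoint groups are enabled and the trace buffer lives in /dev/shm/nvmf_trace.0. Capturing that trace while the disconnect and reset churn is in flight is often the fastest way to see the target's side of events. A sketch built from the hints the app_setup_trace notices print, with the binary location and output paths being assumptions about this build tree:

    # Capture the target-side trace suggested by the notices above.
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # decode a live snapshot via shm id 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0             # or keep the raw buffer for offline analysis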
00:40:24.735 [2024-06-11 14:08:17.438649] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:24.735 [2024-06-11 14:08:17.438807] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.735 [2024-06-11 14:08:17.438810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:25.325 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:25.325 [2024-06-11 14:08:18.154403] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15935d0/0x1597ac0) succeed. 00:40:25.325 [2024-06-11 14:08:18.165179] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1594b70/0x15d9150) succeed. 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:25.585 Malloc0 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:25.585 [2024-06-11 14:08:18.309004] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
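[editor's note] With the target listening, the test reconfigures it over RPC: an RDMA transport with 1024 shared buffers and 8192-byte in-capsule data, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and an RDMA listener on 192.168.100.8:4420 (the two create_ib_device notices are the mlx5 ports being claimed). The same bring-up expressed as direct scripts/rpc.py calls, mirroring the rpc_cmd invocations traced above:

    # Target configuration equivalent to the rpc_cmd sequence above.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420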
00:40:25.585 14:08:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2408457 00:40:25.585 [2024-06-11 14:08:18.355681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:25.585 [2024-06-11 14:08:18.355703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:25.585 [2024-06-11 14:08:18.355919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:25.585 [2024-06-11 14:08:18.355928] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:25.585 [2024-06-11 14:08:18.355938] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:40:25.585 [2024-06-11 14:08:18.355951] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:40:25.585 [2024-06-11 14:08:18.359448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:25.586 [2024-06-11 14:08:18.369744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:25.586 [2024-06-11 14:08:18.429201] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:40:33.722 00:40:33.722 Latency(us) 00:40:33.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:33.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:33.722 Verification LBA range: start 0x0 length 0x4000 00:40:33.722 Nvme1n1 : 15.01 12210.95 47.70 7925.35 0.00 6330.52 353.28 1034594.99 00:40:33.722 =================================================================================================================== 00:40:33.722 Total : 12210.95 47.70 7925.35 0.00 6330.52 353.28 1034594.99 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:40:33.984 rmmod nvme_rdma 00:40:33.984 rmmod nvme_fabrics 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # return 0 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2409746 ']' 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2409746 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 2409746 ']' 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 2409746 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:33.984 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2409746 00:40:34.245 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:40:34.245 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:40:34.245 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2409746' 00:40:34.245 killing process with pid 2409746 00:40:34.245 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 2409746 00:40:34.245 14:08:26 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 2409746 00:40:34.245 14:08:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:34.245 14:08:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:40:34.245 00:40:34.245 real 0m25.437s 00:40:34.245 user 1m4.041s 00:40:34.245 sys 0m6.008s 00:40:34.245 14:08:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:34.245 14:08:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:34.245 ************************************ 00:40:34.245 END TEST nvmf_bdevperf 00:40:34.245 ************************************ 00:40:34.245 14:08:27 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:40:34.246 14:08:27 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:34.246 14:08:27 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:34.246 14:08:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:40:34.507 ************************************ 00:40:34.507 START TEST nvmf_target_disconnect 00:40:34.507 ************************************ 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:40:34.507 * Looking for test storage... 
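[editor's note] The END TEST marker above closes nvmf_bdevperf: the host eventually reconnects, bdevperf finishes with roughly 12.2k IOPS over the 15 s run despite the forced failovers, the script deletes subsystem nqn.2016-06.io.spdk:cnode1 over RPC, and nvmftestfini unloads nvme-rdma/nvme-fabrics and kills the target (pid 2409746) before run_test moves on to nvmf_target_disconnect, which re-sources nvmf/common.sh below. A condensed sketch of that teardown path, with $nvmfpid standing in for the recorded pid:

    # Teardown roughly as the script and nvmftestfini do above.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma nvme-fabrics     # matches the rmmod nvme_rdma / nvme_fabrics output
    kill "$nvmfpid" && wait "$nvmfpid"        # 2409746 in this run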
00:40:34.507 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.507 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:40:34.508 14:08:27 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:40:41.097 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:40:41.097 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:40:41.097 Found net devices under 0000:98:00.0: mlx_0_0 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:40:41.097 Found net devices under 0000:98:00.1: mlx_0_1 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:40:41.097 14:08:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:40:41.097 14:08:33 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:40:41.359 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:41.359 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:40:41.359 altname enp152s0f0np0 00:40:41.359 altname ens817f0np0 00:40:41.359 inet 192.168.100.8/24 scope global mlx_0_0 00:40:41.359 valid_lft forever preferred_lft forever 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:41.359 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:40:41.360 14:08:34 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:40:41.360 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:40:41.360 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:40:41.360 altname enp152s0f1np1 00:40:41.360 altname ens817f1np1 00:40:41.360 inet 192.168.100.9/24 scope global mlx_0_1 00:40:41.360 valid_lft forever preferred_lft forever 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:41.360 14:08:34 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:40:41.360 192.168.100.9' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:40:41.360 192.168.100.9' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:40:41.360 192.168.100.9' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:41.360 ************************************ 00:40:41.360 START TEST nvmf_target_disconnect_tc1 00:40:41.360 ************************************ 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:40:41.360 14:08:34 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:41.360 EAL: No free 2048 kB hugepages reported on node 1 00:40:41.622 [2024-06-11 14:08:34.287422] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:41.622 [2024-06-11 14:08:34.287518] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:41.622 [2024-06-11 14:08:34.287552] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:40:42.565 [2024-06-11 14:08:35.292094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:42.565 [2024-06-11 14:08:35.292144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
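[editor's note] The tc1 case running here points the reconnect example at 192.168.100.8:4420 before any target is listening, so the probe errors around this point are expected; the test wraps the command so that the failure counts as a pass (the END TEST marker follows just below). A minimal sketch of that kind of expected-failure wrapper, with a hypothetical name (not the real autotest_common.sh helper), could be:

    # Run a command that is supposed to fail; succeed only if it does fail.
    expect_failure() {
        if "$@"; then
            echo "ERROR: '$*' unexpectedly succeeded" >&2
            return 1
        fi
        return 0
    }

    # Usage, mirroring the invocation in the log (path shortened here):
    expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
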
00:40:42.565 [2024-06-11 14:08:35.292169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:40:42.565 [2024-06-11 14:08:35.292225] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:42.565 [2024-06-11 14:08:35.292246] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:40:42.565 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:40:42.565 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:40:42.565 Initializing NVMe Controllers 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:40:42.565 00:40:42.565 real 0m1.132s 00:40:42.565 user 0m0.953s 00:40:42.565 sys 0m0.161s 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:40:42.565 ************************************ 00:40:42.565 END TEST nvmf_target_disconnect_tc1 00:40:42.565 ************************************ 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:42.565 ************************************ 00:40:42.565 START TEST nvmf_target_disconnect_tc2 00:40:42.565 ************************************ 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2415511 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2415511 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:42.565 
14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2415511 ']' 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:42.565 14:08:35 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:42.565 [2024-06-11 14:08:35.438328] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:42.565 [2024-06-11 14:08:35.438376] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:42.565 EAL: No free 2048 kB hugepages reported on node 1 00:40:42.827 [2024-06-11 14:08:35.514444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:42.827 [2024-06-11 14:08:35.588386] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:42.827 [2024-06-11 14:08:35.588431] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:42.827 [2024-06-11 14:08:35.588439] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:42.827 [2024-06-11 14:08:35.588445] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:42.827 [2024-06-11 14:08:35.588451] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
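[editor's note] nvmfappstart above launches nvmf_tgt with "-i 0 -e 0xFFFF -m 0xF0" and then blocks in waitforlisten until the RPC socket (/var/tmp/spdk.sock) answers. A rough, hedged equivalent of that wait loop, not the real test-framework helper, assuming rpc.py is available:

    pid=$nvmfpid                        # recorded when nvmf_tgt was launched
    for _ in $(seq 1 120); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        if ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; then
            break                       # /var/tmp/spdk.sock is up and serving RPCs
        fi
        sleep 0.5
    done
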
00:40:42.827 [2024-06-11 14:08:35.588609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:40:42.827 [2024-06-11 14:08:35.588763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:40:42.827 [2024-06-11 14:08:35.588920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:40:42.827 [2024-06-11 14:08:35.588920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.398 Malloc0 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.398 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.399 [2024-06-11 14:08:36.304011] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ea3110/0x1eaec80) succeed. 00:40:43.660 [2024-06-11 14:08:36.317476] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ea4750/0x1ef0310) succeed. 
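[editor's note] The bdev_malloc_create and nvmf_create_transport calls above, together with the subsystem, namespace and listener created in the lines that follow, can be reproduced outside the test harness with scripts/rpc.py against a running nvmf_tgt. A sketch using the same values this run shows (adjust the rpc.py path and address for your machine):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
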
00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.660 [2024-06-11 14:08:36.477899] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2415670 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:40:43.660 14:08:36 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:40:43.660 EAL: No free 2048 kB hugepages reported on node 1 00:40:46.203 14:08:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2415511 00:40:46.203 14:08:38 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:40:46.773 
Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Write completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.773 Read completed with error (sct=0, sc=8) 00:40:46.773 starting I/O failed 00:40:46.774 Write completed with error (sct=0, sc=8) 00:40:46.774 starting I/O failed 00:40:46.774 Write completed with error (sct=0, sc=8) 00:40:46.774 starting I/O failed 00:40:46.774 Read completed with error (sct=0, sc=8) 00:40:46.774 starting I/O failed 00:40:46.774 [2024-06-11 14:08:39.673459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:46.774 [2024-06-11 14:08:39.675948] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:46.774 [2024-06-11 14:08:39.675993] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:46.774 [2024-06-11 14:08:39.676012] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:47.715 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2415511 Killed "${NVMF_APP[@]}" "$@" 00:40:47.715 
14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2416515 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2416515 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2416515 ']' 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:47.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:47.715 14:08:40 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:47.715 [2024-06-11 14:08:40.560206] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:40:47.715 [2024-06-11 14:08:40.560260] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:47.715 EAL: No free 2048 kB hugepages reported on node 1 00:40:47.975 [2024-06-11 14:08:40.638700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:47.975 [2024-06-11 14:08:40.680441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:47.975 qpair failed and we were unable to recover it. 00:40:47.975 [2024-06-11 14:08:40.682908] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:40:47.975 [2024-06-11 14:08:40.682920] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:40:47.975 [2024-06-11 14:08:40.682925] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:47.975 [2024-06-11 14:08:40.692747] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:47.975 [2024-06-11 14:08:40.692771] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:47.975 [2024-06-11 14:08:40.692776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:47.975 [2024-06-11 14:08:40.692781] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:47.975 [2024-06-11 14:08:40.692785] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:47.975 [2024-06-11 14:08:40.692932] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:40:47.975 [2024-06-11 14:08:40.693064] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:40:47.975 [2024-06-11 14:08:40.693197] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:40:47.975 [2024-06-11 14:08:40.693199] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.545 Malloc0 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.545 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.545 [2024-06-11 14:08:41.424622] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15b2110/0x15bdc80) succeed. 00:40:48.545 [2024-06-11 14:08:41.435875] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15b3750/0x15ff310) succeed. 
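[editor's note] Both nvmf_tgt instances in this run are started with "-m 0xF0", and the reactor notices above confirm that mask maps to cores 4-7. A tiny illustrative snippet for expanding such a hex core mask (purely for reading the log, not part of the test):

    mask=0xF0
    cores=()
    for cpu in $(seq 0 63); do
        (( (mask >> cpu) & 1 )) && cores+=("$cpu")
    done
    echo "reactors expected on cores: ${cores[*]}"   # -> 4 5 6 7
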
00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 [2024-06-11 14:08:41.566715] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:48.805 14:08:41 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2415670 00:40:48.805 [2024-06-11 14:08:41.687340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:48.806 qpair failed and we were unable to recover it. 
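[editor's note] The tc2 flow visible here and in the earlier lines is: start the reconnect initiator in the background against the first target, kill that target with SIGKILL (hence the burst of failed I/O completions above), bring up a second nvmf_tgt with the same Malloc0/cnode1/listener, and finally wait on the initiator to see whether it recovered. Stripped of the harness, the pattern is roughly:

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"        # drop the target hard; outstanding I/O fails
    # ... start a new nvmf_tgt and re-create Malloc0/cnode1/listener as above ...
    wait "$reconnectpid"      # non-zero status means the host never reconnected
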
00:40:48.806 [2024-06-11 14:08:41.697673] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:48.806 [2024-06-11 14:08:41.697715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:48.806 [2024-06-11 14:08:41.697727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:48.806 [2024-06-11 14:08:41.697733] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:48.806 [2024-06-11 14:08:41.697738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:48.806 [2024-06-11 14:08:41.707207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:48.806 qpair failed and we were unable to recover it. 00:40:49.068 [2024-06-11 14:08:41.717535] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.717572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.068 [2024-06-11 14:08:41.717583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.068 [2024-06-11 14:08:41.717588] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.068 [2024-06-11 14:08:41.717594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.068 [2024-06-11 14:08:41.727050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.068 qpair failed and we were unable to recover it. 00:40:49.068 [2024-06-11 14:08:41.737728] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.737760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.068 [2024-06-11 14:08:41.737771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.068 [2024-06-11 14:08:41.737776] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.068 [2024-06-11 14:08:41.737781] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.068 [2024-06-11 14:08:41.747179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.068 qpair failed and we were unable to recover it. 
00:40:49.068 [2024-06-11 14:08:41.757585] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.757619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.068 [2024-06-11 14:08:41.757629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.068 [2024-06-11 14:08:41.757635] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.068 [2024-06-11 14:08:41.757639] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.068 [2024-06-11 14:08:41.767312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.068 qpair failed and we were unable to recover it. 00:40:49.068 [2024-06-11 14:08:41.777689] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.777721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.068 [2024-06-11 14:08:41.777741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.068 [2024-06-11 14:08:41.777748] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.068 [2024-06-11 14:08:41.777753] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.068 [2024-06-11 14:08:41.787397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.068 qpair failed and we were unable to recover it. 00:40:49.068 [2024-06-11 14:08:41.797391] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.797421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.068 [2024-06-11 14:08:41.797433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.068 [2024-06-11 14:08:41.797442] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.068 [2024-06-11 14:08:41.797447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.068 [2024-06-11 14:08:41.807256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.068 qpair failed and we were unable to recover it. 
00:40:49.068 [2024-06-11 14:08:41.818300] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.818330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.068 [2024-06-11 14:08:41.818340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.068 [2024-06-11 14:08:41.818345] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.068 [2024-06-11 14:08:41.818350] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.068 [2024-06-11 14:08:41.827295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.068 qpair failed and we were unable to recover it. 00:40:49.068 [2024-06-11 14:08:41.837845] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.068 [2024-06-11 14:08:41.837874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.837884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.837889] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.837894] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.847475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 00:40:49.069 [2024-06-11 14:08:41.857335] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.069 [2024-06-11 14:08:41.857364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.857374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.857379] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.857384] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.867694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 
00:40:49.069 [2024-06-11 14:08:41.878358] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.069 [2024-06-11 14:08:41.878392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.878412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.878418] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.878423] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.887530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 00:40:49.069 [2024-06-11 14:08:41.898191] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.069 [2024-06-11 14:08:41.898220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.898231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.898236] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.898241] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.907658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 00:40:49.069 [2024-06-11 14:08:41.917952] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.069 [2024-06-11 14:08:41.917980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.917989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.917995] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.917999] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.927671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 
00:40:49.069 [2024-06-11 14:08:41.938558] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.069 [2024-06-11 14:08:41.938591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.938600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.938605] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.938610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.947661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 00:40:49.069 [2024-06-11 14:08:41.957577] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.069 [2024-06-11 14:08:41.957611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.069 [2024-06-11 14:08:41.957620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.069 [2024-06-11 14:08:41.957625] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.069 [2024-06-11 14:08:41.957630] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.069 [2024-06-11 14:08:41.967731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.069 qpair failed and we were unable to recover it. 00:40:49.331 [2024-06-11 14:08:41.978641] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:41.978675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:41.978687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:41.978692] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:41.978697] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:41.987902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 
00:40:49.331 [2024-06-11 14:08:41.997366] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:41.997396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:41.997405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:41.997410] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:41.997414] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.007839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 00:40:49.331 [2024-06-11 14:08:42.018207] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:42.018239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:42.018248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:42.018253] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:42.018258] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.027819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 00:40:49.331 [2024-06-11 14:08:42.038675] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:42.038709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:42.038719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:42.038724] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:42.038728] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.047955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 
00:40:49.331 [2024-06-11 14:08:42.058779] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:42.058810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:42.058820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:42.058825] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:42.058832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.068074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 00:40:49.331 [2024-06-11 14:08:42.078415] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:42.078445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:42.078455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:42.078460] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:42.078465] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.088101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 00:40:49.331 [2024-06-11 14:08:42.098840] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:42.098870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:42.098880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:42.098886] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:42.098890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.108365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 
00:40:49.331 [2024-06-11 14:08:42.118966] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.331 [2024-06-11 14:08:42.119001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.331 [2024-06-11 14:08:42.119010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.331 [2024-06-11 14:08:42.119015] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.331 [2024-06-11 14:08:42.119024] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.331 [2024-06-11 14:08:42.128363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.331 qpair failed and we were unable to recover it. 00:40:49.331 [2024-06-11 14:08:42.138507] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.332 [2024-06-11 14:08:42.138543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.332 [2024-06-11 14:08:42.138552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.332 [2024-06-11 14:08:42.138557] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.332 [2024-06-11 14:08:42.138562] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.332 [2024-06-11 14:08:42.148254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.332 qpair failed and we were unable to recover it. 00:40:49.332 [2024-06-11 14:08:42.158672] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.332 [2024-06-11 14:08:42.158703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.332 [2024-06-11 14:08:42.158713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.332 [2024-06-11 14:08:42.158718] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.332 [2024-06-11 14:08:42.158722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.332 [2024-06-11 14:08:42.168355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.332 qpair failed and we were unable to recover it. 
00:40:49.332 [2024-06-11 14:08:42.179155] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.332 [2024-06-11 14:08:42.179186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.332 [2024-06-11 14:08:42.179196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.332 [2024-06-11 14:08:42.179201] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.332 [2024-06-11 14:08:42.179205] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.332 [2024-06-11 14:08:42.188390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.332 qpair failed and we were unable to recover it. 00:40:49.332 [2024-06-11 14:08:42.199434] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.332 [2024-06-11 14:08:42.199474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.332 [2024-06-11 14:08:42.199494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.332 [2024-06-11 14:08:42.199500] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.332 [2024-06-11 14:08:42.199505] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.332 [2024-06-11 14:08:42.208456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.332 qpair failed and we were unable to recover it. 00:40:49.332 [2024-06-11 14:08:42.219111] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.332 [2024-06-11 14:08:42.219149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.332 [2024-06-11 14:08:42.219160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.332 [2024-06-11 14:08:42.219165] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.332 [2024-06-11 14:08:42.219169] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.332 [2024-06-11 14:08:42.228463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.332 qpair failed and we were unable to recover it. 
00:40:49.332 [2024-06-11 14:08:42.239023] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.332 [2024-06-11 14:08:42.239053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.332 [2024-06-11 14:08:42.239072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.332 [2024-06-11 14:08:42.239081] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.332 [2024-06-11 14:08:42.239086] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.593 [2024-06-11 14:08:42.248769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.593 qpair failed and we were unable to recover it. 00:40:49.593 [2024-06-11 14:08:42.259357] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.593 [2024-06-11 14:08:42.259389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.593 [2024-06-11 14:08:42.259399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.593 [2024-06-11 14:08:42.259404] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.593 [2024-06-11 14:08:42.259408] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.593 [2024-06-11 14:08:42.268805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.593 qpair failed and we were unable to recover it. 00:40:49.593 [2024-06-11 14:08:42.279459] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.593 [2024-06-11 14:08:42.279485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.593 [2024-06-11 14:08:42.279495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.593 [2024-06-11 14:08:42.279500] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.593 [2024-06-11 14:08:42.279505] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.593 [2024-06-11 14:08:42.288358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.593 qpair failed and we were unable to recover it. 
00:40:49.593 [2024-06-11 14:08:42.299275] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.593 [2024-06-11 14:08:42.299300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.593 [2024-06-11 14:08:42.299310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.299315] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.299320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.308691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.319059] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.319086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.319095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.319100] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.319104] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.328727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.339671] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.339704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.339724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.339731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.339735] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.348913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 
00:40:49.594 [2024-06-11 14:08:42.359551] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.359586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.359597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.359602] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.359607] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.368826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.379573] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.379603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.379613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.379618] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.379622] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.388926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.399280] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.399308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.399318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.399323] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.399327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.408769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 
00:40:49.594 [2024-06-11 14:08:42.419605] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.419641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.419654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.419659] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.419663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.429276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.440033] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.440071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.440091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.440098] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.440103] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.449061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.460083] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.460111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.460131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.460137] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.460142] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.469305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 
00:40:49.594 [2024-06-11 14:08:42.479626] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.479654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.479664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.479669] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.479674] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.594 [2024-06-11 14:08:42.489126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.594 qpair failed and we were unable to recover it. 00:40:49.594 [2024-06-11 14:08:42.499992] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.594 [2024-06-11 14:08:42.500030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.594 [2024-06-11 14:08:42.500040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.594 [2024-06-11 14:08:42.500045] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.594 [2024-06-11 14:08:42.500052] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.509215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 00:40:49.856 [2024-06-11 14:08:42.520026] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.520058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.520067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.520072] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.520077] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.529342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 
00:40:49.856 [2024-06-11 14:08:42.540266] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.540294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.540304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.540309] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.540313] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.549384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 00:40:49.856 [2024-06-11 14:08:42.559851] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.559881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.559901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.559907] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.559912] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.569520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 00:40:49.856 [2024-06-11 14:08:42.580442] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.580475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.580486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.580491] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.580496] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.589429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 
00:40:49.856 [2024-06-11 14:08:42.600389] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.600416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.600426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.600432] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.600436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.609653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 00:40:49.856 [2024-06-11 14:08:42.620235] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.620265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.620274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.620279] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.620283] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.629549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 00:40:49.856 [2024-06-11 14:08:42.639912] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.639946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.639956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.639961] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.639965] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.649660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 
00:40:49.856 [2024-06-11 14:08:42.660248] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.660284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.660294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.660299] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.660303] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.669431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.856 qpair failed and we were unable to recover it. 00:40:49.856 [2024-06-11 14:08:42.680433] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.856 [2024-06-11 14:08:42.680461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.856 [2024-06-11 14:08:42.680481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.856 [2024-06-11 14:08:42.680490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.856 [2024-06-11 14:08:42.680495] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.856 [2024-06-11 14:08:42.689782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.857 qpair failed and we were unable to recover it. 00:40:49.857 [2024-06-11 14:08:42.700617] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.857 [2024-06-11 14:08:42.700645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.857 [2024-06-11 14:08:42.700656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.857 [2024-06-11 14:08:42.700661] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.857 [2024-06-11 14:08:42.700667] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.857 [2024-06-11 14:08:42.709906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.857 qpair failed and we were unable to recover it. 
00:40:49.857 [2024-06-11 14:08:42.720474] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.857 [2024-06-11 14:08:42.720504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.857 [2024-06-11 14:08:42.720523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.857 [2024-06-11 14:08:42.720529] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.857 [2024-06-11 14:08:42.720534] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.857 [2024-06-11 14:08:42.729810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.857 qpair failed and we were unable to recover it. 00:40:49.857 [2024-06-11 14:08:42.740665] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.857 [2024-06-11 14:08:42.740701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.857 [2024-06-11 14:08:42.740712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.857 [2024-06-11 14:08:42.740717] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.857 [2024-06-11 14:08:42.740722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:49.857 [2024-06-11 14:08:42.749944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:49.857 qpair failed and we were unable to recover it. 00:40:49.857 [2024-06-11 14:08:42.760736] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:49.857 [2024-06-11 14:08:42.760766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:49.857 [2024-06-11 14:08:42.760776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:49.857 [2024-06-11 14:08:42.760781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:49.857 [2024-06-11 14:08:42.760786] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.118 [2024-06-11 14:08:42.769981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.118 qpair failed and we were unable to recover it. 
00:40:50.118 [2024-06-11 14:08:42.781118] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.118 [2024-06-11 14:08:42.781155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.118 [2024-06-11 14:08:42.781175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.118 [2024-06-11 14:08:42.781180] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.118 [2024-06-11 14:08:42.781186] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.118 [2024-06-11 14:08:42.790062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.118 qpair failed and we were unable to recover it. 00:40:50.118 [2024-06-11 14:08:42.800329] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.118 [2024-06-11 14:08:42.800358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.118 [2024-06-11 14:08:42.800369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.118 [2024-06-11 14:08:42.800374] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.118 [2024-06-11 14:08:42.800378] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.118 [2024-06-11 14:08:42.810101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.118 qpair failed and we were unable to recover it. 00:40:50.118 [2024-06-11 14:08:42.821060] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.118 [2024-06-11 14:08:42.821101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.118 [2024-06-11 14:08:42.821121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.118 [2024-06-11 14:08:42.821127] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.118 [2024-06-11 14:08:42.821132] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.118 [2024-06-11 14:08:42.830332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.118 qpair failed and we were unable to recover it. 
00:40:50.118 [2024-06-11 14:08:42.840833] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.118 [2024-06-11 14:08:42.840862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.118 [2024-06-11 14:08:42.840873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.118 [2024-06-11 14:08:42.840878] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.118 [2024-06-11 14:08:42.840882] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.118 [2024-06-11 14:08:42.850228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.118 qpair failed and we were unable to recover it. 00:40:50.118 [2024-06-11 14:08:42.861079] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.118 [2024-06-11 14:08:42.861112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.118 [2024-06-11 14:08:42.861136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.861142] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.861147] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.870318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 00:40:50.119 [2024-06-11 14:08:42.880560] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:42.880590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:42.880601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.880606] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.880610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.890529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 
00:40:50.119 [2024-06-11 14:08:42.901021] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:42.901048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:42.901058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.901063] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.901067] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.910382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 00:40:50.119 [2024-06-11 14:08:42.921072] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:42.921099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:42.921109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.921114] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.921118] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.930437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 00:40:50.119 [2024-06-11 14:08:42.941158] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:42.941183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:42.941193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.941198] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.941205] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.950594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 
00:40:50.119 [2024-06-11 14:08:42.961025] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:42.961056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:42.961076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.961082] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.961087] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.970675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 00:40:50.119 [2024-06-11 14:08:42.981198] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:42.981231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:42.981242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:42.981247] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:42.981252] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:42.990653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 00:40:50.119 [2024-06-11 14:08:43.001291] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:43.001318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:43.001328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:43.001333] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:43.001338] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.119 [2024-06-11 14:08:43.010745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.119 qpair failed and we were unable to recover it. 
00:40:50.119 [2024-06-11 14:08:43.021444] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.119 [2024-06-11 14:08:43.021478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.119 [2024-06-11 14:08:43.021498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.119 [2024-06-11 14:08:43.021504] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.119 [2024-06-11 14:08:43.021509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.380 [2024-06-11 14:08:43.030849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.380 qpair failed and we were unable to recover it. 00:40:50.380 [2024-06-11 14:08:43.041168] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.380 [2024-06-11 14:08:43.041196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.380 [2024-06-11 14:08:43.041207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.380 [2024-06-11 14:08:43.041212] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.380 [2024-06-11 14:08:43.041217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.380 [2024-06-11 14:08:43.050843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.380 qpair failed and we were unable to recover it. 00:40:50.380 [2024-06-11 14:08:43.061544] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.380 [2024-06-11 14:08:43.061578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.380 [2024-06-11 14:08:43.061587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.380 [2024-06-11 14:08:43.061592] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.380 [2024-06-11 14:08:43.061597] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.380 [2024-06-11 14:08:43.070741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.380 qpair failed and we were unable to recover it. 
00:40:50.380 [2024-06-11 14:08:43.081714] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.081745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.081765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.081771] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.081777] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.090739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.101752] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.101779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.101799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.101805] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.101810] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.110803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.121042] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.121072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.121083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.121091] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.121095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.130978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 
00:40:50.381 [2024-06-11 14:08:43.141723] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.141758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.141768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.141773] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.141778] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.151083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.161762] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.161790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.161799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.161804] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.161809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.171215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.182102] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.182131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.182140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.182145] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.182150] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.191260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 
00:40:50.381 [2024-06-11 14:08:43.201538] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.201568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.201577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.201582] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.201588] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.211193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.222070] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.222100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.222120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.222126] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.222130] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.231525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.241984] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.242015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.242028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.242034] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.242038] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.251360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 
00:40:50.381 [2024-06-11 14:08:43.262196] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.262228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.262248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.262254] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.262259] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.381 [2024-06-11 14:08:43.271614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.381 qpair failed and we were unable to recover it. 00:40:50.381 [2024-06-11 14:08:43.281551] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.381 [2024-06-11 14:08:43.281581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.381 [2024-06-11 14:08:43.281592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.381 [2024-06-11 14:08:43.281597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.381 [2024-06-11 14:08:43.281602] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.642 [2024-06-11 14:08:43.291569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.642 qpair failed and we were unable to recover it. 00:40:50.642 [2024-06-11 14:08:43.302170] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.642 [2024-06-11 14:08:43.302202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.642 [2024-06-11 14:08:43.302215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.642 [2024-06-11 14:08:43.302220] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.642 [2024-06-11 14:08:43.302224] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.642 [2024-06-11 14:08:43.311525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.642 qpair failed and we were unable to recover it. 
00:40:50.642 [2024-06-11 14:08:43.322270] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.642 [2024-06-11 14:08:43.322301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.642 [2024-06-11 14:08:43.322310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.642 [2024-06-11 14:08:43.322315] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.642 [2024-06-11 14:08:43.322319] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.331535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.342233] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.342263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.342273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.342278] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.342283] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.351490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.361975] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.362005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.362014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.362023] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.362027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.371602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 
00:40:50.643 [2024-06-11 14:08:43.382437] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.382469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.382489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.382495] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.382504] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.391598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.402147] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.402177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.402188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.402194] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.402198] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.411593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.422451] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.422478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.422488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.422493] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.422497] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.432057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 
00:40:50.643 [2024-06-11 14:08:43.442068] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.442097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.442107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.442111] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.442116] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.451591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.462671] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.462704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.462714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.462719] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.462724] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.471849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.482584] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.482622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.482643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.482648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.482654] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.492137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 
00:40:50.643 [2024-06-11 14:08:43.502113] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.502154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.502174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.502180] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.502186] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.511987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.521910] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.521938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.521949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.521955] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.521960] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.531658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 00:40:50.643 [2024-06-11 14:08:43.542750] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.643 [2024-06-11 14:08:43.542784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.643 [2024-06-11 14:08:43.542794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.643 [2024-06-11 14:08:43.542800] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.643 [2024-06-11 14:08:43.542804] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.643 [2024-06-11 14:08:43.552371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.643 qpair failed and we were unable to recover it. 
00:40:50.905 [2024-06-11 14:08:43.562309] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.905 [2024-06-11 14:08:43.562343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.905 [2024-06-11 14:08:43.562353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.905 [2024-06-11 14:08:43.562361] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.905 [2024-06-11 14:08:43.562366] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.905 [2024-06-11 14:08:43.572230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.905 qpair failed and we were unable to recover it. 00:40:50.905 [2024-06-11 14:08:43.582600] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.905 [2024-06-11 14:08:43.582632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.905 [2024-06-11 14:08:43.582643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.905 [2024-06-11 14:08:43.582648] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.905 [2024-06-11 14:08:43.582652] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.905 [2024-06-11 14:08:43.592399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.905 qpair failed and we were unable to recover it. 00:40:50.905 [2024-06-11 14:08:43.602580] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.905 [2024-06-11 14:08:43.602610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.905 [2024-06-11 14:08:43.602619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.905 [2024-06-11 14:08:43.602624] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.905 [2024-06-11 14:08:43.602629] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.905 [2024-06-11 14:08:43.612130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.905 qpair failed and we were unable to recover it. 
00:40:50.905 [2024-06-11 14:08:43.622679] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.905 [2024-06-11 14:08:43.622711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.905 [2024-06-11 14:08:43.622731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.905 [2024-06-11 14:08:43.622737] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.905 [2024-06-11 14:08:43.622742] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.905 [2024-06-11 14:08:43.632239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.905 qpair failed and we were unable to recover it. 00:40:50.905 [2024-06-11 14:08:43.642624] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.905 [2024-06-11 14:08:43.642651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.905 [2024-06-11 14:08:43.642662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.905 [2024-06-11 14:08:43.642667] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.905 [2024-06-11 14:08:43.642672] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.905 [2024-06-11 14:08:43.652239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.905 qpair failed and we were unable to recover it. 00:40:50.905 [2024-06-11 14:08:43.663006] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.905 [2024-06-11 14:08:43.663042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.905 [2024-06-11 14:08:43.663052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.905 [2024-06-11 14:08:43.663057] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.905 [2024-06-11 14:08:43.663062] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.672561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 
00:40:50.906 [2024-06-11 14:08:43.682786] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.682814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.682823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.682828] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.682833] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.692471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 00:40:50.906 [2024-06-11 14:08:43.702779] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.702810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.702819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.702824] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.702829] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.712541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 00:40:50.906 [2024-06-11 14:08:43.722910] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.722942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.722951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.722956] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.722960] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.732510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 
00:40:50.906 [2024-06-11 14:08:43.742973] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.743002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.743014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.743022] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.743026] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.752537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 00:40:50.906 [2024-06-11 14:08:43.762978] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.763003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.763013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.763021] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.763026] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.772637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 00:40:50.906 [2024-06-11 14:08:43.783390] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.783422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.783432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.783437] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.783441] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.792327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 
00:40:50.906 [2024-06-11 14:08:43.802588] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:50.906 [2024-06-11 14:08:43.802614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:50.906 [2024-06-11 14:08:43.802624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:50.906 [2024-06-11 14:08:43.802629] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:50.906 [2024-06-11 14:08:43.802633] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:50.906 [2024-06-11 14:08:43.812896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:50.906 qpair failed and we were unable to recover it. 00:40:51.167 [2024-06-11 14:08:43.823571] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.823604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.823624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.823629] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.823637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.832838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 00:40:51.167 [2024-06-11 14:08:43.843349] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.843376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.843388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.843392] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.843397] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.853214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 
00:40:51.167 [2024-06-11 14:08:43.863606] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.863638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.863648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.863652] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.863657] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.872646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 00:40:51.167 [2024-06-11 14:08:43.883215] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.883243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.883252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.883257] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.883261] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.892967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 00:40:51.167 [2024-06-11 14:08:43.903713] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.903742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.903751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.903756] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.903760] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.912947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 
00:40:51.167 [2024-06-11 14:08:43.923466] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.923493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.923502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.923507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.923512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.933200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 00:40:51.167 [2024-06-11 14:08:43.943668] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.943702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.943712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.943717] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.943722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.953246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.167 qpair failed and we were unable to recover it. 00:40:51.167 [2024-06-11 14:08:43.963182] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.167 [2024-06-11 14:08:43.963216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.167 [2024-06-11 14:08:43.963226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.167 [2024-06-11 14:08:43.963231] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.167 [2024-06-11 14:08:43.963235] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.167 [2024-06-11 14:08:43.973241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.168 qpair failed and we were unable to recover it. 
00:40:51.168 [2024-06-11 14:08:43.983826] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.168 [2024-06-11 14:08:43.983853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.168 [2024-06-11 14:08:43.983863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.168 [2024-06-11 14:08:43.983868] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.168 [2024-06-11 14:08:43.983872] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.168 [2024-06-11 14:08:43.993495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.168 qpair failed and we were unable to recover it. 00:40:51.168 [2024-06-11 14:08:44.003589] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.168 [2024-06-11 14:08:44.003615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.168 [2024-06-11 14:08:44.003625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.168 [2024-06-11 14:08:44.003632] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.168 [2024-06-11 14:08:44.003637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.168 [2024-06-11 14:08:44.013399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.168 qpair failed and we were unable to recover it. 00:40:51.168 [2024-06-11 14:08:44.024056] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.168 [2024-06-11 14:08:44.024090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.168 [2024-06-11 14:08:44.024099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.168 [2024-06-11 14:08:44.024104] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.168 [2024-06-11 14:08:44.024109] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.168 [2024-06-11 14:08:44.033571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.168 qpair failed and we were unable to recover it. 
00:40:51.168 [2024-06-11 14:08:44.043297] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.168 [2024-06-11 14:08:44.043322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.168 [2024-06-11 14:08:44.043332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.168 [2024-06-11 14:08:44.043337] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.168 [2024-06-11 14:08:44.043341] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.168 [2024-06-11 14:08:44.053395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.168 qpair failed and we were unable to recover it. 00:40:51.168 [2024-06-11 14:08:44.063996] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.168 [2024-06-11 14:08:44.064034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.168 [2024-06-11 14:08:44.064044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.168 [2024-06-11 14:08:44.064048] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.168 [2024-06-11 14:08:44.064053] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.168 [2024-06-11 14:08:44.073457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.168 qpair failed and we were unable to recover it. 00:40:51.429 [2024-06-11 14:08:44.083872] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.083899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.083908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.083913] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.083917] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.093646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 
00:40:51.429 [2024-06-11 14:08:44.104232] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.104265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.104274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.104279] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.104283] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.113830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 00:40:51.429 [2024-06-11 14:08:44.123989] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.124013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.124025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.124030] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.124034] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.133713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 00:40:51.429 [2024-06-11 14:08:44.144441] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.144466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.144476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.144480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.144484] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.153884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 
00:40:51.429 [2024-06-11 14:08:44.164263] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.164290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.164299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.164304] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.164310] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.174067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 00:40:51.429 [2024-06-11 14:08:44.184334] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.184363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.184375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.184380] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.184384] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.194139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 00:40:51.429 [2024-06-11 14:08:44.204184] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.204217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.204227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.204232] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.204236] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.213944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 
00:40:51.429 [2024-06-11 14:08:44.224548] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.224577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.224587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.224591] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.224596] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.429 [2024-06-11 14:08:44.234211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.429 qpair failed and we were unable to recover it. 00:40:51.429 [2024-06-11 14:08:44.244392] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.429 [2024-06-11 14:08:44.244418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.429 [2024-06-11 14:08:44.244428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.429 [2024-06-11 14:08:44.244433] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.429 [2024-06-11 14:08:44.244437] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.430 [2024-06-11 14:08:44.254061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.430 qpair failed and we were unable to recover it. 00:40:51.430 [2024-06-11 14:08:44.264672] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.430 [2024-06-11 14:08:44.264701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.430 [2024-06-11 14:08:44.264710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.430 [2024-06-11 14:08:44.264715] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.430 [2024-06-11 14:08:44.264722] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.430 [2024-06-11 14:08:44.274086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.430 qpair failed and we were unable to recover it. 
00:40:51.430 [2024-06-11 14:08:44.284629] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.430 [2024-06-11 14:08:44.284656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.430 [2024-06-11 14:08:44.284676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.430 [2024-06-11 14:08:44.284682] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.430 [2024-06-11 14:08:44.284686] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.430 [2024-06-11 14:08:44.294186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.430 qpair failed and we were unable to recover it. 00:40:51.430 [2024-06-11 14:08:44.304878] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.430 [2024-06-11 14:08:44.304913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.430 [2024-06-11 14:08:44.304933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.430 [2024-06-11 14:08:44.304939] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.430 [2024-06-11 14:08:44.304943] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.430 [2024-06-11 14:08:44.314361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.430 qpair failed and we were unable to recover it. 00:40:51.430 [2024-06-11 14:08:44.324577] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.430 [2024-06-11 14:08:44.324608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.430 [2024-06-11 14:08:44.324619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.430 [2024-06-11 14:08:44.324624] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.430 [2024-06-11 14:08:44.324628] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.430 [2024-06-11 14:08:44.334329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.430 qpair failed and we were unable to recover it. 
00:40:51.691 [2024-06-11 14:08:44.345221] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.345249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.345259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.345264] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.345268] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.354319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.364625] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.364659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.364668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.364673] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.364678] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.374672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.385059] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.385097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.385117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.385123] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.385128] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.394462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 
00:40:51.691 [2024-06-11 14:08:44.404740] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.404770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.404781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.404786] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.404790] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.414675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.425025] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.425052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.425062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.425067] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.425071] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.434672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.444807] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.444830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.444840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.444847] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.444851] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.454622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 
00:40:51.691 [2024-06-11 14:08:44.465336] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.465362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.465371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.465376] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.465380] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.474752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.485071] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.485099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.485108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.485113] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.485117] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.494802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.505417] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.505451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.505461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.505465] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.505469] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.515075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 
00:40:51.691 [2024-06-11 14:08:44.525499] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.525529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.525539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.525544] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.525548] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.534939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.545551] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.545575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.691 [2024-06-11 14:08:44.545585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.691 [2024-06-11 14:08:44.545590] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.691 [2024-06-11 14:08:44.545594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.691 [2024-06-11 14:08:44.555135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.691 qpair failed and we were unable to recover it. 00:40:51.691 [2024-06-11 14:08:44.565184] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.691 [2024-06-11 14:08:44.565209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.692 [2024-06-11 14:08:44.565218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.692 [2024-06-11 14:08:44.565223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.692 [2024-06-11 14:08:44.565227] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.692 [2024-06-11 14:08:44.575058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.692 qpair failed and we were unable to recover it. 
00:40:51.692 [2024-06-11 14:08:44.585706] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.692 [2024-06-11 14:08:44.585736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.692 [2024-06-11 14:08:44.585756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.692 [2024-06-11 14:08:44.585761] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.692 [2024-06-11 14:08:44.585766] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.692 [2024-06-11 14:08:44.595062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.692 qpair failed and we were unable to recover it. 00:40:51.953 [2024-06-11 14:08:44.605808] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.953 [2024-06-11 14:08:44.605840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.953 [2024-06-11 14:08:44.605852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.953 [2024-06-11 14:08:44.605857] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.953 [2024-06-11 14:08:44.605861] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.953 [2024-06-11 14:08:44.615117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.953 qpair failed and we were unable to recover it. 00:40:51.953 [2024-06-11 14:08:44.625831] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.953 [2024-06-11 14:08:44.625856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.625869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.625873] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.625878] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.635357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 
00:40:51.954 [2024-06-11 14:08:44.645592] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.645619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.645629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.645633] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.645637] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.655118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.665886] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.665916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.665926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.665930] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.665934] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.675275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.685766] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.685794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.685803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.685809] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.685813] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.695133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 
00:40:51.954 [2024-06-11 14:08:44.706029] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.706059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.706068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.706073] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.706079] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.715357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.725808] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.725836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.725845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.725850] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.725854] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.735470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.745961] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.746000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.746009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.746014] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.746021] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.755490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 
00:40:51.954 [2024-06-11 14:08:44.765899] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.765923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.765932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.765937] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.765941] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.775401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.786187] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.786216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.786225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.786230] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.786234] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.795775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.805860] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.805892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.805902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.805906] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.805911] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.815641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 
00:40:51.954 [2024-06-11 14:08:44.826241] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.826272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.826282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.826286] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.826290] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.835647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:51.954 [2024-06-11 14:08:44.846442] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:51.954 [2024-06-11 14:08:44.846470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:51.954 [2024-06-11 14:08:44.846479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:51.954 [2024-06-11 14:08:44.846484] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:51.954 [2024-06-11 14:08:44.846488] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:51.954 [2024-06-11 14:08:44.856044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:51.954 qpair failed and we were unable to recover it. 00:40:52.215 [2024-06-11 14:08:44.866385] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.215 [2024-06-11 14:08:44.866415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.215 [2024-06-11 14:08:44.866424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.215 [2024-06-11 14:08:44.866429] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.215 [2024-06-11 14:08:44.866433] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.215 [2024-06-11 14:08:44.875916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.215 qpair failed and we were unable to recover it. 
00:40:52.216 [2024-06-11 14:08:44.886231] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:44.886258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:44.886267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:44.886274] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:44.886278] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:44.895955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:44.906683] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:44.906712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:44.906721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:44.906726] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:44.906730] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:44.915977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:44.926486] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:44.926514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:44.926523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:44.926528] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:44.926532] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:44.936103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 
00:40:52.216 [2024-06-11 14:08:44.946781] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:44.946812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:44.946821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:44.946826] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:44.946830] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:44.956023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:44.966411] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:44.966437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:44.966446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:44.966451] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:44.966455] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:44.976177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:44.986914] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:44.986941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:44.986950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:44.986955] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:44.986959] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:44.996116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 
00:40:52.216 [2024-06-11 14:08:45.006716] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:45.006746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:45.006755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:45.006760] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:45.006765] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:45.016458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:45.026655] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:45.026689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:45.026699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:45.026704] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:45.026708] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:45.036211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:45.046535] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:45.046561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:45.046572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:45.046578] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:45.046582] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:45.056328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 
00:40:52.216 [2024-06-11 14:08:45.066969] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:45.067000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:45.067012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:45.067019] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:45.067024] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:45.076475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:45.086886] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:45.086916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:45.086926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:45.086931] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:45.086935] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:45.096548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 00:40:52.216 [2024-06-11 14:08:45.107135] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.216 [2024-06-11 14:08:45.107164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.216 [2024-06-11 14:08:45.107173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.216 [2024-06-11 14:08:45.107178] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.216 [2024-06-11 14:08:45.107182] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.216 [2024-06-11 14:08:45.116715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.216 qpair failed and we were unable to recover it. 
00:40:52.478 [2024-06-11 14:08:45.126959] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.478 [2024-06-11 14:08:45.126988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.478 [2024-06-11 14:08:45.126997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.478 [2024-06-11 14:08:45.127002] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.478 [2024-06-11 14:08:45.127007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.478 [2024-06-11 14:08:45.136576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.478 qpair failed and we were unable to recover it. 00:40:52.478 [2024-06-11 14:08:45.147377] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.478 [2024-06-11 14:08:45.147411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.478 [2024-06-11 14:08:45.147420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.478 [2024-06-11 14:08:45.147425] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.478 [2024-06-11 14:08:45.147432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.478 [2024-06-11 14:08:45.156850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.478 qpair failed and we were unable to recover it. 00:40:52.478 [2024-06-11 14:08:45.167442] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.478 [2024-06-11 14:08:45.167476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.478 [2024-06-11 14:08:45.167485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.167490] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.167494] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.176548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 
00:40:52.479 [2024-06-11 14:08:45.187509] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.187539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.187548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.187553] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.187557] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.196874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.207154] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.207182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.207191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.207195] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.207200] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.216711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.226732] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.226761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.226771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.226775] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.226780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.236939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 
00:40:52.479 [2024-06-11 14:08:45.247453] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.247480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.247490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.247495] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.247499] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.257243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.266939] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.266965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.266975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.266979] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.266983] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.276816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.287181] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.287210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.287220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.287224] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.287228] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.297133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 
00:40:52.479 [2024-06-11 14:08:45.307545] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.307579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.307599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.307604] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.307609] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.317062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.327753] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.327786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.327797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.327805] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.327809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.337380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.347754] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.347790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.347800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.347805] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.347809] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.357272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 
00:40:52.479 [2024-06-11 14:08:45.366963] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.366992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.367001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.367006] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.367010] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.479 [2024-06-11 14:08:45.377353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.479 qpair failed and we were unable to recover it. 00:40:52.479 [2024-06-11 14:08:45.388148] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.479 [2024-06-11 14:08:45.388180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.479 [2024-06-11 14:08:45.388190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.479 [2024-06-11 14:08:45.388194] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.479 [2024-06-11 14:08:45.388199] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.397112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 00:40:52.741 [2024-06-11 14:08:45.407807] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.407839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.407849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.407853] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.407857] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.417197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 
00:40:52.741 [2024-06-11 14:08:45.428102] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.428133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.428143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.428148] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.428152] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.437562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 00:40:52.741 [2024-06-11 14:08:45.447785] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.447813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.447823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.447828] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.447832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.457367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 00:40:52.741 [2024-06-11 14:08:45.468123] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.468156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.468165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.468170] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.468174] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.477506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 
00:40:52.741 [2024-06-11 14:08:45.488124] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.488158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.488167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.488172] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.488176] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.497633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 00:40:52.741 [2024-06-11 14:08:45.508106] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.508132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.508143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.508148] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.508152] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.517674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 00:40:52.741 [2024-06-11 14:08:45.527614] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.527640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.527650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.527654] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.527658] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.537765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 
00:40:52.741 [2024-06-11 14:08:45.548391] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.548418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.548428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.741 [2024-06-11 14:08:45.548432] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.741 [2024-06-11 14:08:45.548436] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.741 [2024-06-11 14:08:45.557962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.741 qpair failed and we were unable to recover it. 00:40:52.741 [2024-06-11 14:08:45.568465] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.741 [2024-06-11 14:08:45.568491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.741 [2024-06-11 14:08:45.568500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.742 [2024-06-11 14:08:45.568505] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.742 [2024-06-11 14:08:45.568509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.742 [2024-06-11 14:08:45.577541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.742 qpair failed and we were unable to recover it. 00:40:52.742 [2024-06-11 14:08:45.588460] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.742 [2024-06-11 14:08:45.588490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.742 [2024-06-11 14:08:45.588499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.742 [2024-06-11 14:08:45.588503] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.742 [2024-06-11 14:08:45.588510] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.742 [2024-06-11 14:08:45.597846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.742 qpair failed and we were unable to recover it. 
00:40:52.742 [2024-06-11 14:08:45.608274] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.742 [2024-06-11 14:08:45.608304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.742 [2024-06-11 14:08:45.608313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.742 [2024-06-11 14:08:45.608317] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.742 [2024-06-11 14:08:45.608321] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.742 [2024-06-11 14:08:45.618027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.742 qpair failed and we were unable to recover it. 00:40:52.742 [2024-06-11 14:08:45.628270] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.742 [2024-06-11 14:08:45.628306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.742 [2024-06-11 14:08:45.628316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.742 [2024-06-11 14:08:45.628321] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.742 [2024-06-11 14:08:45.628325] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:52.742 [2024-06-11 14:08:45.637952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:52.742 qpair failed and we were unable to recover it. 00:40:52.742 [2024-06-11 14:08:45.648755] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:52.742 [2024-06-11 14:08:45.648785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:52.742 [2024-06-11 14:08:45.648794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:52.742 [2024-06-11 14:08:45.648799] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:52.742 [2024-06-11 14:08:45.648803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.658449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 
00:40:53.003 [2024-06-11 14:08:45.668928] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.668954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.668964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.668969] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.668973] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.678021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 00:40:53.003 [2024-06-11 14:08:45.688129] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.688159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.688168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.688173] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.688177] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.698181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 00:40:53.003 [2024-06-11 14:08:45.708215] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.708252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.708261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.708266] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.708270] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.718348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 
00:40:53.003 [2024-06-11 14:08:45.728897] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.728931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.728940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.728945] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.728949] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.738235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 00:40:53.003 [2024-06-11 14:08:45.748648] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.748679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.748688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.748693] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.748697] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.758051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 00:40:53.003 [2024-06-11 14:08:45.768627] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.768653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.768662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.768669] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.768674] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.778381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 
00:40:53.003 [2024-06-11 14:08:45.788977] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.789011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.789023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.789028] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.789032] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.798447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 00:40:53.003 [2024-06-11 14:08:45.809111] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.809136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.809146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.809150] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.809154] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.818118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 00:40:53.003 [2024-06-11 14:08:45.828461] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.828500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.828510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.828514] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.828518] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.003 [2024-06-11 14:08:45.838718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.003 qpair failed and we were unable to recover it. 
00:40:53.003 [2024-06-11 14:08:45.848947] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.003 [2024-06-11 14:08:45.848976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.003 [2024-06-11 14:08:45.848985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.003 [2024-06-11 14:08:45.848990] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.003 [2024-06-11 14:08:45.848994] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.004 [2024-06-11 14:08:45.858450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.004 qpair failed and we were unable to recover it. 00:40:53.004 [2024-06-11 14:08:45.869389] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.004 [2024-06-11 14:08:45.869425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.004 [2024-06-11 14:08:45.869445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.004 [2024-06-11 14:08:45.869451] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.004 [2024-06-11 14:08:45.869455] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.004 [2024-06-11 14:08:45.878615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.004 qpair failed and we were unable to recover it. 00:40:53.004 [2024-06-11 14:08:45.889562] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.004 [2024-06-11 14:08:45.889588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.004 [2024-06-11 14:08:45.889599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.004 [2024-06-11 14:08:45.889603] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.004 [2024-06-11 14:08:45.889608] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.004 [2024-06-11 14:08:45.898799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.004 qpair failed and we were unable to recover it. 
00:40:53.004 [2024-06-11 14:08:45.909151] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.004 [2024-06-11 14:08:45.909177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.004 [2024-06-11 14:08:45.909187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.004 [2024-06-11 14:08:45.909192] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.004 [2024-06-11 14:08:45.909197] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:45.918807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 00:40:53.265 [2024-06-11 14:08:45.929279] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:45.929305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.265 [2024-06-11 14:08:45.929315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.265 [2024-06-11 14:08:45.929319] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.265 [2024-06-11 14:08:45.929323] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:45.938805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 00:40:53.265 [2024-06-11 14:08:45.949566] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:45.949599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.265 [2024-06-11 14:08:45.949612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.265 [2024-06-11 14:08:45.949617] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.265 [2024-06-11 14:08:45.949621] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:45.958973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 
00:40:53.265 [2024-06-11 14:08:45.969500] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:45.969527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.265 [2024-06-11 14:08:45.969537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.265 [2024-06-11 14:08:45.969542] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.265 [2024-06-11 14:08:45.969546] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:45.979071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 00:40:53.265 [2024-06-11 14:08:45.989581] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:45.989607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.265 [2024-06-11 14:08:45.989616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.265 [2024-06-11 14:08:45.989621] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.265 [2024-06-11 14:08:45.989625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:45.999058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 00:40:53.265 [2024-06-11 14:08:46.009411] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:46.009439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.265 [2024-06-11 14:08:46.009448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.265 [2024-06-11 14:08:46.009453] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.265 [2024-06-11 14:08:46.009457] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:46.019117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 
00:40:53.265 [2024-06-11 14:08:46.029551] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:46.029583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.265 [2024-06-11 14:08:46.029593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.265 [2024-06-11 14:08:46.029597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.265 [2024-06-11 14:08:46.029604] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.265 [2024-06-11 14:08:46.039175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.265 qpair failed and we were unable to recover it. 00:40:53.265 [2024-06-11 14:08:46.049567] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.265 [2024-06-11 14:08:46.049592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.049601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.049606] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.049610] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.266 [2024-06-11 14:08:46.059120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.266 qpair failed and we were unable to recover it. 00:40:53.266 [2024-06-11 14:08:46.069785] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.266 [2024-06-11 14:08:46.069814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.069823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.069827] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.069832] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.266 [2024-06-11 14:08:46.079292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.266 qpair failed and we were unable to recover it. 
00:40:53.266 [2024-06-11 14:08:46.089593] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.266 [2024-06-11 14:08:46.089622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.089631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.089635] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.089640] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.266 [2024-06-11 14:08:46.099338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.266 qpair failed and we were unable to recover it. 00:40:53.266 [2024-06-11 14:08:46.109839] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.266 [2024-06-11 14:08:46.109872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.109882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.109886] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.109890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.266 [2024-06-11 14:08:46.119304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.266 qpair failed and we were unable to recover it. 00:40:53.266 [2024-06-11 14:08:46.130031] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.266 [2024-06-11 14:08:46.130064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.130084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.130089] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.130094] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.266 [2024-06-11 14:08:46.139519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.266 qpair failed and we were unable to recover it. 
00:40:53.266 [2024-06-11 14:08:46.149797] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.266 [2024-06-11 14:08:46.149832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.149843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.149848] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.149853] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.266 [2024-06-11 14:08:46.159460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.266 qpair failed and we were unable to recover it. 00:40:53.266 [2024-06-11 14:08:46.169704] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.266 [2024-06-11 14:08:46.169730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.266 [2024-06-11 14:08:46.169739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.266 [2024-06-11 14:08:46.169744] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.266 [2024-06-11 14:08:46.169748] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.179503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.189855] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.189882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.189891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.189896] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.189900] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.199697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 
00:40:53.527 [2024-06-11 14:08:46.210251] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.210284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.210304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.210313] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.210317] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.219544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.230292] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.230326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.230346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.230352] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.230356] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.239783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.250050] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.250075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.250086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.250091] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.250095] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.259780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 
00:40:53.527 [2024-06-11 14:08:46.270413] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.270443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.270453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.270458] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.270462] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.279803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.290413] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.290438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.290447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.290452] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.290456] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.299885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.310411] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.310447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.310456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.310461] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.310465] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.319798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 
00:40:53.527 [2024-06-11 14:08:46.330360] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.330387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.330396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.330401] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.330405] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.340023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.350469] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.350498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.350507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.350512] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.350516] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.360207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.370635] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.370659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.370668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.370672] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.370677] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.380007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 
00:40:53.527 [2024-06-11 14:08:46.390549] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.390573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.390585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.390590] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.527 [2024-06-11 14:08:46.390594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.527 [2024-06-11 14:08:46.400189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.527 qpair failed and we were unable to recover it. 00:40:53.527 [2024-06-11 14:08:46.410617] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.527 [2024-06-11 14:08:46.410644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.527 [2024-06-11 14:08:46.410654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.527 [2024-06-11 14:08:46.410658] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.528 [2024-06-11 14:08:46.410663] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.528 [2024-06-11 14:08:46.420177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.528 qpair failed and we were unable to recover it. 00:40:53.528 [2024-06-11 14:08:46.430664] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.528 [2024-06-11 14:08:46.430699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.528 [2024-06-11 14:08:46.430708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.528 [2024-06-11 14:08:46.430713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.528 [2024-06-11 14:08:46.430717] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.440352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 
00:40:53.789 [2024-06-11 14:08:46.450847] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.789 [2024-06-11 14:08:46.450877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.789 [2024-06-11 14:08:46.450886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.789 [2024-06-11 14:08:46.450891] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.789 [2024-06-11 14:08:46.450895] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.460169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 00:40:53.789 [2024-06-11 14:08:46.471159] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.789 [2024-06-11 14:08:46.471190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.789 [2024-06-11 14:08:46.471209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.789 [2024-06-11 14:08:46.471215] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.789 [2024-06-11 14:08:46.471223] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.480328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 00:40:53.789 [2024-06-11 14:08:46.490599] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.789 [2024-06-11 14:08:46.490628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.789 [2024-06-11 14:08:46.490639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.789 [2024-06-11 14:08:46.490644] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.789 [2024-06-11 14:08:46.490648] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.500286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 
00:40:53.789 [2024-06-11 14:08:46.511066] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.789 [2024-06-11 14:08:46.511094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.789 [2024-06-11 14:08:46.511104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.789 [2024-06-11 14:08:46.511108] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.789 [2024-06-11 14:08:46.511113] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.520631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 00:40:53.789 [2024-06-11 14:08:46.530912] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.789 [2024-06-11 14:08:46.530945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.789 [2024-06-11 14:08:46.530954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.789 [2024-06-11 14:08:46.530959] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.789 [2024-06-11 14:08:46.530963] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.540375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 00:40:53.789 [2024-06-11 14:08:46.551195] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.789 [2024-06-11 14:08:46.551231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.789 [2024-06-11 14:08:46.551251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.789 [2024-06-11 14:08:46.551256] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.789 [2024-06-11 14:08:46.551261] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.789 [2024-06-11 14:08:46.560400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.789 qpair failed and we were unable to recover it. 
00:40:53.789 [2024-06-11 14:08:46.570804] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.570833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.570845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.570850] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.570855] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.790 [2024-06-11 14:08:46.580723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.790 qpair failed and we were unable to recover it. 00:40:53.790 [2024-06-11 14:08:46.591290] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.591322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.591342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.591347] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.591352] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.790 [2024-06-11 14:08:46.600847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.790 qpair failed and we were unable to recover it. 00:40:53.790 [2024-06-11 14:08:46.611340] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.611366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.611377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.611382] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.611386] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.790 [2024-06-11 14:08:46.620724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.790 qpair failed and we were unable to recover it. 
00:40:53.790 [2024-06-11 14:08:46.631445] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.631470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.631479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.631484] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.631489] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.790 [2024-06-11 14:08:46.640926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.790 qpair failed and we were unable to recover it. 00:40:53.790 [2024-06-11 14:08:46.651257] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.651288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.651308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.651317] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.651322] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.790 [2024-06-11 14:08:46.660848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.790 qpair failed and we were unable to recover it. 00:40:53.790 [2024-06-11 14:08:46.671276] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.671306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.671318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.671322] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.671327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:53.790 [2024-06-11 14:08:46.681057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:53.790 qpair failed and we were unable to recover it. 
00:40:53.790 [2024-06-11 14:08:46.691169] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:53.790 [2024-06-11 14:08:46.691196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:53.790 [2024-06-11 14:08:46.691206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:53.790 [2024-06-11 14:08:46.691210] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:53.790 [2024-06-11 14:08:46.691215] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:54.050 [2024-06-11 14:08:46.700936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:54.050 qpair failed and we were unable to recover it. 00:40:54.050 [2024-06-11 14:08:46.711515] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:54.050 [2024-06-11 14:08:46.711545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:54.050 [2024-06-11 14:08:46.711555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:54.050 [2024-06-11 14:08:46.711560] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:54.050 [2024-06-11 14:08:46.711564] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:40:54.050 [2024-06-11 14:08:46.721033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:40:54.050 qpair failed and we were unable to recover it. 
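Each block in the stretch above is one attempt by the tc2 host to bring I/O qpair 1 back up after the controller has been torn down on the target side: the target rejects the fabrics CONNECT because it no longer knows controller ID 0x1 (sct 1, sc 130), the RDMA connect poll then fails, and spdk_nvme_qpair_process_completions reports CQ transport error -6 (No such device or address) before the qpair is abandoned. To tally how many such attempts a saved capture contains, a one-liner along these lines is enough; console.log is a placeholder name for wherever this console output was written:

  # count abandoned qpair connect attempts in a saved capture (file name is hypothetical)
  grep -c 'qpair failed and we were unable to recover it' console.log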
00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Write completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 Read completed with error (sct=0, sc=8) 00:40:54.991 starting I/O failed 00:40:54.991 [2024-06-11 14:08:47.726620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:54.991 [2024-06-11 14:08:47.734021] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:54.991 [2024-06-11 14:08:47.734055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:54.991 [2024-06-11 14:08:47.734072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:54.991 [2024-06-11 14:08:47.734080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:40:54.991 [2024-06-11 14:08:47.734088] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bab80 00:40:54.991 [2024-06-11 14:08:47.743981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:54.991 qpair failed and we were unable to recover it. 00:40:54.991 [2024-06-11 14:08:47.754679] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:54.991 [2024-06-11 14:08:47.754714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:54.991 [2024-06-11 14:08:47.754728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:54.991 [2024-06-11 14:08:47.754735] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:54.991 [2024-06-11 14:08:47.754741] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002bab80 00:40:54.991 [2024-06-11 14:08:47.764077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:40:54.991 qpair failed and we were unable to recover it. 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, 
sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Read completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 Write completed with error (sct=0, sc=8) 00:40:55.931 starting I/O failed 00:40:55.931 [2024-06-11 14:08:48.769761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:55.931 [2024-06-11 14:08:48.777078] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:55.931 [2024-06-11 14:08:48.777110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:55.931 [2024-06-11 14:08:48.777122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:55.932 [2024-06-11 14:08:48.777128] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:55.932 [2024-06-11 14:08:48.777133] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:40:55.932 [2024-06-11 14:08:48.787004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:55.932 qpair failed and we were unable to recover it. 00:40:55.932 [2024-06-11 14:08:48.797508] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:55.932 [2024-06-11 14:08:48.797536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:55.932 [2024-06-11 14:08:48.797547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:55.932 [2024-06-11 14:08:48.797552] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:55.932 [2024-06-11 14:08:48.797557] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:40:55.932 [2024-06-11 14:08:48.806734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:40:55.932 qpair failed and we were unable to recover it. 
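The bursts of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" that punctuate these retries are the queued I/Os being failed back when a qpair drops: status code type 0 is the generic status set, and status code 0x08 there corresponds to a command aborted because its submission queue was deleted. A rough breakdown of how many reads versus writes were aborted in a capture (console.log again being a placeholder name):

  # tally aborted reads vs. writes in a saved capture (file name is hypothetical)
  grep -oE '(Read|Write) completed with error \(sct=0, sc=8\)' console.log | sort | uniq -c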
00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Read completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 Write completed with error (sct=0, sc=8) 00:40:57.316 starting I/O failed 00:40:57.316 [2024-06-11 14:08:49.812380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:57.316 [2024-06-11 14:08:49.819774] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:57.316 [2024-06-11 14:08:49.819808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:57.316 [2024-06-11 14:08:49.819825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:57.316 [2024-06-11 14:08:49.819833] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to 
poll NVMe-oF Fabric CONNECT command 00:40:57.316 [2024-06-11 14:08:49.819840] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:40:57.316 [2024-06-11 14:08:49.830135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:57.316 qpair failed and we were unable to recover it. 00:40:57.316 [2024-06-11 14:08:49.840511] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:40:57.316 [2024-06-11 14:08:49.840549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:40:57.316 [2024-06-11 14:08:49.840575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:40:57.316 [2024-06-11 14:08:49.840583] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:57.316 [2024-06-11 14:08:49.840589] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:40:57.316 [2024-06-11 14:08:49.849992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:40:57.316 qpair failed and we were unable to recover it. 00:40:57.316 [2024-06-11 14:08:49.850163] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:40:57.316 A controller has encountered a failure and is being reset. 00:40:57.316 [2024-06-11 14:08:49.850290] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:40:57.316 [2024-06-11 14:08:49.888644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:40:57.316 Controller properly reset. 00:40:57.316 Initializing NVMe Controllers 00:40:57.316 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:40:57.316 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:40:57.316 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:40:57.316 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:40:57.316 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:40:57.316 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:40:57.316 Initialization complete. Launching workers. 
00:40:57.316 Starting thread on core 1 00:40:57.316 Starting thread on core 2 00:40:57.316 Starting thread on core 3 00:40:57.316 Starting thread on core 0 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:40:57.316 00:40:57.316 real 0m14.566s 00:40:57.316 user 0m28.092s 00:40:57.316 sys 0m2.524s 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:40:57.316 ************************************ 00:40:57.316 END TEST nvmf_target_disconnect_tc2 00:40:57.316 ************************************ 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:57.316 14:08:49 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:40:57.316 ************************************ 00:40:57.316 START TEST nvmf_target_disconnect_tc3 00:40:57.316 ************************************ 00:40:57.316 14:08:50 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc3 00:40:57.316 14:08:50 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2418261 00:40:57.316 14:08:50 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:40:57.316 14:08:50 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:40:57.316 EAL: No free 2048 kB hugepages reported on node 1 00:40:59.265 14:08:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2416515 00:40:59.265 14:08:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:41:00.680 Read completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.680 Write completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.680 Read completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.680 Write completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.680 Write completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.680 Read completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.680 Read completed with error (sct=0, sc=8) 00:41:00.680 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 
Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Write completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 Read completed with error (sct=0, sc=8) 00:41:00.681 starting I/O failed 00:41:00.681 [2024-06-11 14:08:53.212129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:00.681 [2024-06-11 14:08:53.214908] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:00.681 [2024-06-11 14:08:53.214921] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:00.681 [2024-06-11 14:08:53.214926] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:01.251 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2416515 Killed "${NVMF_APP[@]}" "$@" 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2418946 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2418946 00:41:01.251 14:08:54 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2418946 ']' 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:01.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:01.251 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:01.251 [2024-06-11 14:08:54.104883] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:41:01.251 [2024-06-11 14:08:54.104935] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:01.251 EAL: No free 2048 kB hugepages reported on node 1 00:41:01.512 [2024-06-11 14:08:54.183151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:01.512 [2024-06-11 14:08:54.219052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:01.512 qpair failed and we were unable to recover it. 00:41:01.513 [2024-06-11 14:08:54.221442] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:01.513 [2024-06-11 14:08:54.221454] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:01.513 [2024-06-11 14:08:54.221459] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:01.513 [2024-06-11 14:08:54.238088] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:01.513 [2024-06-11 14:08:54.238116] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:01.513 [2024-06-11 14:08:54.238125] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:01.513 [2024-06-11 14:08:54.238129] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:01.513 [2024-06-11 14:08:54.238133] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
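The EAL notice above about node 1 having no free 2048 kB hugepages is worth a glance whenever bring-up misbehaves, since nvmf_tgt and the reconnect example both sit on DPDK and need hugepage-backed memory. Per-node availability can be read from the standard kernel sysfs layout; the path below is generic Linux, not something taken from this trace:

  # per-NUMA-node free 2 MiB hugepages (standard sysfs layout)
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages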
00:41:01.513 [2024-06-11 14:08:54.238274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:41:01.513 [2024-06-11 14:08:54.238427] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:41:01.513 [2024-06-11 14:08:54.238584] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:41:01.513 [2024-06-11 14:08:54.238586] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@863 -- # return 0 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.082 Malloc0 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.082 14:08:54 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.082 [2024-06-11 14:08:54.957370] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5e1110/0x5ecc80) succeed. 00:41:02.082 [2024-06-11 14:08:54.968579] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5e2750/0x62e310) succeed. 
00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.343 [2024-06-11 14:08:55.100580] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:02.343 14:08:55 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2418261 00:41:02.343 [2024-06-11 14:08:55.225826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:02.343 qpair failed and we were unable to recover it. 
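Reproducing the tc3 sequence by hand follows the same steps the harness drives through rpc_cmd above. The sketch below assumes it is run from the SPDK repository root, that scripts/rpc.py talks to the default /var/tmp/spdk.sock socket, and that the RDMA ports already carry 192.168.100.8 and 192.168.100.9; the shell variables stand in for the literal PIDs seen in the trace, and everything else restates commands visible in the log:

  # start the I/O load with a failover address, then take the original target away
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' &
  reconnect_pid=$!
  sleep 2
  kill -9 "$old_tgt_pid"   # hypothetical variable: PID of the target that was serving 192.168.100.8

  # bring up the replacement target on the alternate address
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  # (the harness waits for the RPC socket via waitforlisten before issuing RPCs)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

  # the harness then waits on the reconnect PID; a zero exit means the workload survived the failover
  wait "$reconnect_pid"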
00:41:02.343 [2024-06-11 14:08:55.228318] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:02.343 [2024-06-11 14:08:55.228329] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:02.343 [2024-06-11 14:08:55.228333] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:03.724 [2024-06-11 14:08:56.232634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:03.724 qpair failed and we were unable to recover it. 00:41:03.724 [2024-06-11 14:08:56.234962] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:03.724 [2024-06-11 14:08:56.234973] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:03.724 [2024-06-11 14:08:56.234978] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:04.663 [2024-06-11 14:08:57.239431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:04.663 qpair failed and we were unable to recover it. 00:41:04.663 [2024-06-11 14:08:57.241949] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:04.663 [2024-06-11 14:08:57.241960] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:04.663 [2024-06-11 14:08:57.241965] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:05.604 [2024-06-11 14:08:58.246358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:05.604 qpair failed and we were unable to recover it. 00:41:05.604 [2024-06-11 14:08:58.248929] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:05.604 [2024-06-11 14:08:58.248940] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:05.604 [2024-06-11 14:08:58.248945] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:06.546 [2024-06-11 14:08:59.253364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:06.546 qpair failed and we were unable to recover it. 00:41:06.546 [2024-06-11 14:08:59.255902] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:06.546 [2024-06-11 14:08:59.255917] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:06.546 [2024-06-11 14:08:59.255922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:07.488 [2024-06-11 14:09:00.260146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:07.488 qpair failed and we were unable to recover it. 
00:41:07.488 [2024-06-11 14:09:00.262830] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:07.488 [2024-06-11 14:09:00.262843] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:07.488 [2024-06-11 14:09:00.262849] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:41:08.432 [2024-06-11 14:09:01.267396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:41:08.432 qpair failed and we were unable to recover it. 00:41:09.391 Read completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Write completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Read completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Read completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Read completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Write completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Write completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Read completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Read completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Write completed with error (sct=0, sc=8) 00:41:09.391 starting I/O failed 00:41:09.391 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Read completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 Write completed with error (sct=0, sc=8) 00:41:09.392 starting I/O failed 00:41:09.392 [2024-06-11 14:09:02.273357] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:09.392 [2024-06-11 14:09:02.276057] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:09.392 [2024-06-11 14:09:02.276071] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:09.392 [2024-06-11 14:09:02.276077] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:41:10.780 [2024-06-11 14:09:03.280261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:10.780 qpair failed and we were unable to recover it. 00:41:10.780 [2024-06-11 14:09:03.282341] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:41:10.780 [2024-06-11 14:09:03.282353] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:41:10.780 [2024-06-11 14:09:03.282361] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:41:11.721 [2024-06-11 14:09:04.286681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:41:11.721 qpair failed and we were unable to recover it. 00:41:11.721 [2024-06-11 14:09:04.286811] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:41:11.721 A controller has encountered a failure and is being reset. 00:41:11.721 Resorting to new failover address 192.168.100.9 00:41:11.721 [2024-06-11 14:09:04.286915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:41:11.721 [2024-06-11 14:09:04.286973] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:41:11.721 [2024-06-11 14:09:04.289725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:41:11.721 Controller properly reset. 
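The keep-alive failure is what finally forces the switch: the host marks nqn.2016-06.io.spdk:cnode1 failed, resorts to the alternate address 192.168.100.9, and resets the controller there. The later bursts of sct=0/sc=8 completions on qpairs 3 and 4 appear to be in-flight I/O still being failed back as those qpairs are recycled, after which the controllers re-attach and the workers are relaunched. Locating those milestones in a saved capture (file name again a placeholder):

  # locate the keep-alive failure and failover milestones in a saved capture
  grep -nE 'Submitting Keep Alive failed|Resorting to new failover address|Controller properly reset' console.log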
00:41:12.663 Read completed with error (sct=0, sc=8) 00:41:12.663 starting I/O failed 00:41:12.663 Read completed with error (sct=0, sc=8) 00:41:12.663 starting I/O failed 00:41:12.663 Write completed with error (sct=0, sc=8) 00:41:12.663 starting I/O failed 00:41:12.663 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Write completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 Read completed with error (sct=0, sc=8) 00:41:12.664 starting I/O failed 00:41:12.664 [2024-06-11 14:09:05.327981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Read completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Read completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Read completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Read completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 
00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Read completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.605 starting I/O failed 00:41:13.605 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Write completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 Read completed with error (sct=0, sc=8) 00:41:13.606 starting I/O failed 00:41:13.606 [2024-06-11 14:09:06.362809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:41:13.606 Initializing NVMe Controllers 00:41:13.606 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:41:13.606 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:41:13.606 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:41:13.606 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:41:13.606 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:41:13.606 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:41:13.606 Initialization complete. Launching workers. 
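The initialization block above is the test's I/O application re-attaching to nqn.2016-06.io.spdk:cnode1 over RDMA after the controller reset. Outside the harness, a comparable host-side attach could be made with nvme-cli; this is only a sketch for reference, not something this run executes (the harness's own connect string, 'nvme connect -i 15', appears further down in this log):

  # hypothetical manual attach to the listener exercised above
  sudo nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -i 15
  # inspect, then tear the connection back down
  sudo nvme list-subsys
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1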
00:41:13.606 Starting thread on core 1 00:41:13.606 Starting thread on core 2 00:41:13.606 Starting thread on core 3 00:41:13.606 Starting thread on core 0 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:41:13.606 00:41:13.606 real 0m16.395s 00:41:13.606 user 1m3.235s 00:41:13.606 sys 0m3.532s 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:41:13.606 ************************************ 00:41:13.606 END TEST nvmf_target_disconnect_tc3 00:41:13.606 ************************************ 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:41:13.606 rmmod nvme_rdma 00:41:13.606 rmmod nvme_fabrics 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:13.606 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2418946 ']' 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2418946 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2418946 ']' 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 2418946 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2418946 00:41:13.866 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2418946' 00:41:13.867 killing process with pid 2418946 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 2418946 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 2418946 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:13.867 14:09:06 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:41:13.867 00:41:13.867 real 0m39.590s 00:41:13.867 user 2m24.112s 00:41:13.867 sys 0m11.668s 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:13.867 14:09:06 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:13.867 ************************************ 00:41:13.867 END TEST nvmf_target_disconnect 00:41:13.867 ************************************ 00:41:14.129 14:09:06 nvmf_rdma -- nvmf/nvmf.sh@125 -- # timing_exit host 00:41:14.129 14:09:06 nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:14.129 14:09:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:14.129 14:09:06 nvmf_rdma -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:41:14.129 00:41:14.129 real 32m34.641s 00:41:14.129 user 95m19.131s 00:41:14.129 sys 6m12.990s 00:41:14.129 14:09:06 nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:14.129 14:09:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:14.129 ************************************ 00:41:14.129 END TEST nvmf_rdma 00:41:14.129 ************************************ 00:41:14.129 14:09:06 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:41:14.129 14:09:06 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:14.129 14:09:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:14.129 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:41:14.129 ************************************ 00:41:14.129 START TEST spdkcli_nvmf_rdma 00:41:14.129 ************************************ 00:41:14.129 14:09:06 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:41:14.129 * Looking for test storage... 
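The spdkcli_nvmf_rdma stage that starts here runs test/spdkcli/nvmf.sh against the RDMA transport. To reproduce just this stage outside the CI pipeline, the same script can be invoked directly from an SPDK checkout (a sketch; it assumes a built tree, the same RDMA NIC setup as this job, and root privileges):

  # run only the spdkcli NVMe-oF test, standalone
  cd /path/to/spdk                     # placeholder for a built SPDK checkout
  sudo ./test/spdkcli/nvmf.sh --transport=rdma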
00:41:14.129 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:14.129 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2421590 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2421590 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@830 -- # '[' -z 2421590 ']' 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@837 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:14.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:14.391 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:14.391 [2024-06-11 14:09:07.120477] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:41:14.392 [2024-06-11 14:09:07.120547] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421590 ] 00:41:14.392 EAL: No free 2048 kB hugepages reported on node 1 00:41:14.392 [2024-06-11 14:09:07.187813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:14.392 [2024-06-11 14:09:07.263933] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.392 [2024-06-11 14:09:07.263936] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@863 -- # return 0 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:41:15.335 14:09:07 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 
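The target whose startup is logged above is build/bin/nvmf_tgt launched with core mask 0x3, after which the harness waits for its RPC socket at /var/tmp/spdk.sock. A hand-run equivalent might look like the sketch below; the rpc.py polling loop is an assumed stand-in for waitforlisten, not a transcript of it:

  # start the NVMe-oF target on cores 0-1 and wait until the RPC socket answers
  sudo ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done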
00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:21.923 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:41:21.924 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:41:21.924 Found 0000:98:00.1 (0x15b3 - 0x1015) 
00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:41:21.924 Found net devices under 0000:98:00.0: mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:41:21.924 Found net devices under 0000:98:00.1: mlx_0_1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:41:21.924 22: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:41:21.924 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:41:21.924 altname enp152s0f0np0 00:41:21.924 altname ens817f0np0 00:41:21.924 inet 192.168.100.8/24 scope global mlx_0_0 00:41:21.924 valid_lft forever preferred_lft forever 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:41:21.924 23: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:41:21.924 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:41:21.924 altname enp152s0f1np1 00:41:21.924 altname ens817f1np1 00:41:21.924 inet 192.168.100.9/24 scope global mlx_0_1 00:41:21.924 valid_lft forever preferred_lft forever 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:41:21.924 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@112 
-- # interface=mlx_0_1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:41:21.925 192.168.100.9' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:41:21.925 192.168.100.9' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:41:21.925 192.168.100.9' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:41:21.925 14:09:14 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:41:22.186 14:09:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:41:22.186 14:09:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:22.186 14:09:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:22.186 14:09:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:22.187 14:09:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:22.187 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:22.187 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:22.187 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:22.187 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:22.187 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:22.187 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:22.187 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:41:22.187 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 
IPv4'\'' '\''192.168.100.8:4260'\'' True 00:41:22.187 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:22.187 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:22.187 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:22.187 ' 00:41:24.732 [2024-06-11 14:09:17.208615] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1416440/0x1546b40) succeed. 00:41:24.732 [2024-06-11 14:09:17.226203] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1417b20/0x14269c0) succeed. 
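The quoted batch above is what spdkcli_job.py replays against the running target. The same objects can be created one command at a time with scripts/spdkcli.py (a sketch that reuses a few commands from the batch verbatim; the RPC socket and working directory are the defaults used in this run):

  # mirror a few of the batched commands interactively
  sudo ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
  sudo ./scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  sudo ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  sudo ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  sudo ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
  # list the resulting tree, as the match step below does
  sudo ./scripts/spdkcli.py ll /nvmf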
00:41:25.676 [2024-06-11 14:09:18.525478] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:41:28.222 [2024-06-11 14:09:20.928744] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:41:30.134 [2024-06-11 14:09:23.003404] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:41:32.047 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:32.047 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:32.047 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:32.047 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:32.047 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:32.047 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:32.047 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:32.047 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:41:32.047 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:41:32.047 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:41:32.047 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:32.047 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:32.047 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:41:32.047 14:09:24 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:32.307 14:09:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:32.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:32.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:32.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:32.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:41:32.307 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:41:32.307 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:32.307 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:32.307 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:32.307 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:32.307 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:32.307 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:32.307 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:32.307 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:32.308 ' 00:41:37.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:37.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:37.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:37.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:37.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:41:37.663 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:41:37.663 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:37.663 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:37.663 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:37.663 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:37.663 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:37.663 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:37.663 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:37.663 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2421590 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@949 -- # '[' -z 2421590 ']' 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # kill -0 2421590 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # uname 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2421590 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2421590' 00:41:37.663 killing process with pid 2421590 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # kill 2421590 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # wait 2421590 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:41:37.663 rmmod nvme_rdma 00:41:37.663 rmmod nvme_fabrics 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:41:37.663 00:41:37.663 real 0m23.427s 00:41:37.663 user 0m50.662s 00:41:37.663 sys 0m5.837s 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:37.663 14:09:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:41:37.663 ************************************ 00:41:37.663 END TEST spdkcli_nvmf_rdma 00:41:37.663 ************************************ 00:41:37.663 14:09:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:41:37.663 14:09:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:41:37.663 14:09:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:41:37.663 14:09:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:41:37.663 14:09:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:41:37.663 14:09:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:41:37.663 14:09:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:41:37.663 14:09:30 -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:37.663 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:41:37.663 14:09:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:41:37.663 14:09:30 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:41:37.663 14:09:30 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:41:37.663 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:41:45.807 INFO: APP EXITING 00:41:45.807 INFO: killing all VMs 00:41:45.807 INFO: killing vhost app 00:41:45.807 INFO: EXIT DONE 00:41:48.386 Waiting for block devices as requested 00:41:48.386 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:48.386 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:48.386 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:48.386 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:48.386 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:48.386 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:48.386 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:48.647 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 
00:41:48.647 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:48.908 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:48.908 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:48.908 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:48.908 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:49.169 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:49.169 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:49.169 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:49.169 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:52.476 Cleaning 00:41:52.476 Removing: /var/run/dpdk/spdk0/config 00:41:52.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:52.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:52.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:52.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:52.476 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:52.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:52.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:52.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:52.737 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:52.737 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:52.737 Removing: /var/run/dpdk/spdk1/config 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:52.737 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:52.737 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:52.737 Removing: /var/run/dpdk/spdk1/mp_socket 00:41:52.737 Removing: /var/run/dpdk/spdk2/config 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:52.737 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:52.737 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:52.737 Removing: /var/run/dpdk/spdk3/config 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:52.737 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:52.737 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:52.737 Removing: /var/run/dpdk/spdk4/config 00:41:52.737 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:52.737 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:52.737 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:52.737 Removing: /dev/shm/bdevperf_trace.pid2090813 00:41:52.737 Removing: /dev/shm/bdevperf_trace.pid2307423 00:41:52.737 Removing: /dev/shm/bdev_svc_trace.1 00:41:52.737 Removing: /dev/shm/nvmf_trace.0 00:41:52.737 Removing: /dev/shm/spdk_tgt_trace.pid1896017 00:41:52.737 Removing: /var/run/dpdk/spdk0 00:41:52.737 Removing: /var/run/dpdk/spdk1 00:41:52.737 Removing: /var/run/dpdk/spdk2 00:41:52.737 Removing: /var/run/dpdk/spdk3 00:41:52.737 Removing: /var/run/dpdk/spdk4 00:41:52.737 Removing: /var/run/dpdk/spdk_pid1894532 00:41:52.737 Removing: /var/run/dpdk/spdk_pid1896017 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1896844 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1897885 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1898228 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1899291 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1899475 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1899743 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1904532 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1905117 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1905406 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1905768 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1906180 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1906500 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1906671 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1906963 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1907347 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1908431 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1911987 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1912302 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1912568 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1912724 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1913107 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1913434 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1913813 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1914049 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1914260 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1914527 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1914715 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1914895 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1915332 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1915687 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1916079 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1916278 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1916492 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1916580 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1916998 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1917336 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1917526 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1917740 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1918092 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1918650 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1919188 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1919454 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1919643 00:41:52.999 Removing: 
/var/run/dpdk/spdk_pid1919979 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1920326 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1920677 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1920902 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1921092 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1921414 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1921772 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1922122 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1922409 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1922623 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1922867 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1923175 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1923509 00:41:52.999 Removing: /var/run/dpdk/spdk_pid1928064 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2045221 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2049968 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2062524 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2068632 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2072702 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2073716 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2090813 00:41:52.999 Removing: /var/run/dpdk/spdk_pid2091165 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2095740 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2102397 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2105580 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2117612 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2146361 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2150562 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2208311 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2214198 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2254195 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2272315 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2305095 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2306195 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2307423 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2312125 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2320254 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2321323 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2322360 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2323420 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2323762 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2329388 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2329393 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2334151 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2334742 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2335387 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2336162 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2336188 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2337859 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2340096 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2342198 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2344235 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2346496 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2348532 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2355624 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2356174 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2357327 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2358514 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2364399 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2367497 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2374449 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2385389 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2385394 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2408240 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2408457 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2415159 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2415670 00:41:53.260 Removing: 
/var/run/dpdk/spdk_pid2418261 00:41:53.260 Removing: /var/run/dpdk/spdk_pid2421590 00:41:53.260 Clean 00:41:53.260 14:09:46 -- common/autotest_common.sh@1450 -- # return 0 00:41:53.260 14:09:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:41:53.260 14:09:46 -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:53.260 14:09:46 -- common/autotest_common.sh@10 -- # set +x 00:41:53.520 14:09:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:41:53.520 14:09:46 -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:53.520 14:09:46 -- common/autotest_common.sh@10 -- # set +x 00:41:53.520 14:09:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:41:53.520 14:09:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:41:53.520 14:09:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:41:53.520 14:09:46 -- spdk/autotest.sh@391 -- # hash lcov 00:41:53.520 14:09:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:41:53.520 14:09:46 -- spdk/autotest.sh@393 -- # hostname 00:41:53.520 14:09:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:41:53.781 geninfo: WARNING: invalid characters removed from testname! 00:42:20.355 14:10:09 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:20.355 14:10:11 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:20.615 14:10:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:23.153 14:10:15 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:24.535 14:10:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external 
-q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:25.916 14:10:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:42:27.298 14:10:20 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:27.299 14:10:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:42:27.299 14:10:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:42:27.299 14:10:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:27.299 14:10:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:27.299 14:10:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.299 14:10:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.299 14:10:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.299 14:10:20 -- paths/export.sh@5 -- $ export PATH 00:42:27.299 14:10:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:27.299 14:10:20 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:42:27.299 14:10:20 -- common/autobuild_common.sh@437 -- $ date +%s 00:42:27.299 14:10:20 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718107820.XXXXXX 00:42:27.299 14:10:20 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718107820.9OU7d7 00:42:27.299 14:10:20 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:42:27.299 14:10:20 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:42:27.299 14:10:20 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:42:27.299 14:10:20 -- 
common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:42:27.299 14:10:20 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:42:27.299 14:10:20 -- common/autobuild_common.sh@453 -- $ get_config_params 00:42:27.299 14:10:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:42:27.299 14:10:20 -- common/autotest_common.sh@10 -- $ set +x 00:42:27.299 14:10:20 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:42:27.299 14:10:20 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:42:27.299 14:10:20 -- pm/common@17 -- $ local monitor 00:42:27.299 14:10:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:27.299 14:10:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:27.299 14:10:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:27.299 14:10:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:27.299 14:10:20 -- pm/common@21 -- $ date +%s 00:42:27.299 14:10:20 -- pm/common@25 -- $ sleep 1 00:42:27.299 14:10:20 -- pm/common@21 -- $ date +%s 00:42:27.299 14:10:20 -- pm/common@21 -- $ date +%s 00:42:27.299 14:10:20 -- pm/common@21 -- $ date +%s 00:42:27.299 14:10:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107820 00:42:27.299 14:10:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107820 00:42:27.299 14:10:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107820 00:42:27.299 14:10:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718107820 00:42:27.562 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107820_collect-vmstat.pm.log 00:42:27.562 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107820_collect-cpu-load.pm.log 00:42:27.562 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107820_collect-cpu-temp.pm.log 00:42:27.562 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718107820_collect-bmc-pm.bmc.pm.log 00:42:28.507 14:10:21 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:42:28.507 14:10:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:42:28.507 14:10:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:28.507 14:10:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:42:28.507 14:10:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 
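Note on the coverage post-processing above: the lcov invocations follow a standard capture -> merge -> filter sequence. A minimal sketch of that pattern in bash (file names and the source directory are illustrative placeholders; the job itself additionally passes --rc lcov_branch_coverage=1 and related options, as shown in the log):

    #!/usr/bin/env bash
    set -euo pipefail

    # 1. Capture the coverage data gathered while the tests ran (illustrative source dir).
    lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o cov_test.info

    # 2. Merge the pre-test baseline with the test-run capture into one tracefile.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

    # 3. Strip third-party and system sources so the report covers project code only
    #    (filter globs taken from the lcov -r calls in the log above).
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
        lcov -q -r cov_total.info "$pattern" -o cov_total.info
    done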
00:42:28.507 14:10:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:42:28.507 14:10:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:42:28.507 14:10:21 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:28.507 14:10:21 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:42:28.507 14:10:21 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:42:28.507 14:10:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:42:28.507 14:10:21 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:42:28.507 14:10:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:42:28.507 14:10:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:42:28.507 14:10:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:28.507 14:10:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:42:28.507 14:10:21 -- pm/common@44 -- $ pid=2441096 00:42:28.507 14:10:21 -- pm/common@50 -- $ kill -TERM 2441096 00:42:28.507 14:10:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:28.507 14:10:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:42:28.507 14:10:21 -- pm/common@44 -- $ pid=2441097 00:42:28.507 14:10:21 -- pm/common@50 -- $ kill -TERM 2441097 00:42:28.507 14:10:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:28.507 14:10:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:42:28.507 14:10:21 -- pm/common@44 -- $ pid=2441099 00:42:28.507 14:10:21 -- pm/common@50 -- $ kill -TERM 2441099 00:42:28.507 14:10:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:28.507 14:10:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:42:28.507 14:10:21 -- pm/common@44 -- $ pid=2441122 00:42:28.507 14:10:21 -- pm/common@50 -- $ sudo -E kill -TERM 2441122 00:42:28.507 + [[ -n 1776222 ]] 00:42:28.507 + sudo kill 1776222 00:42:28.518 [Pipeline] } 00:42:28.534 [Pipeline] // stage 00:42:28.540 [Pipeline] } 00:42:28.558 [Pipeline] // timeout 00:42:28.563 [Pipeline] } 00:42:28.583 [Pipeline] // catchError 00:42:28.590 [Pipeline] } 00:42:28.609 [Pipeline] // wrap 00:42:28.617 [Pipeline] } 00:42:28.634 [Pipeline] // catchError 00:42:28.645 [Pipeline] stage 00:42:28.647 [Pipeline] { (Epilogue) 00:42:28.664 [Pipeline] catchError 00:42:28.667 [Pipeline] { 00:42:28.683 [Pipeline] echo 00:42:28.685 Cleanup processes 00:42:28.691 [Pipeline] sh 00:42:28.984 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:28.985 2441204 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:42:28.985 2441645 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:29.000 [Pipeline] sh 00:42:29.333 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:42:29.334 ++ grep -v 'sudo pgrep' 00:42:29.334 ++ awk '{print $1}' 00:42:29.334 + sudo kill -9 2441204 00:42:29.370 [Pipeline] sh 00:42:29.658 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:41.907 [Pipeline] sh 00:42:42.197 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:42.197 Artifacts sizes are good 00:42:42.213 [Pipeline] 
archiveArtifacts 00:42:42.221 Archiving artifacts 00:42:42.478 [Pipeline] sh 00:42:42.766 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:42:42.782 [Pipeline] cleanWs 00:42:42.793 [WS-CLEANUP] Deleting project workspace... 00:42:42.793 [WS-CLEANUP] Deferred wipeout is used... 00:42:42.801 [WS-CLEANUP] done 00:42:42.803 [Pipeline] } 00:42:42.824 [Pipeline] // catchError 00:42:42.838 [Pipeline] sh 00:42:43.127 + logger -p user.info -t JENKINS-CI 00:42:43.137 [Pipeline] } 00:42:43.156 [Pipeline] // stage 00:42:43.162 [Pipeline] } 00:42:43.179 [Pipeline] // node 00:42:43.184 [Pipeline] End of Pipeline 00:42:43.226 Finished: SUCCESS
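For reference, the process sweep in the epilogue above is a small pgrep/awk/kill pipeline that clears anything still referencing the build workspace. A minimal sketch of that pattern (workspace path shown as an illustrative placeholder):

    #!/usr/bin/env bash
    # Kill leftover processes that still hold on to the workspace after a run.
    WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # List candidate processes, drop the pgrep invocation itself, keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

    # Force-kill whatever remains; '|| true' keeps the cleanup step from failing
    # when no matching processes are found.
    sudo kill -9 $pids || true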